00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 107 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3285 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.082 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.127 Using shallow fetch with depth 1 00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.127 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.203 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.203 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.751 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.762 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.773 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:03.773 > git config core.sparsecheckout # timeout=10 00:00:03.783 > git read-tree -mu HEAD # timeout=10 00:00:03.801 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:03.822 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:03.822 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:03.910 [Pipeline] Start of Pipeline 00:00:03.923 [Pipeline] library 00:00:03.924 Loading library shm_lib@master 00:00:03.924 Library shm_lib@master is cached. Copying from home. 00:00:03.941 [Pipeline] node 00:00:03.953 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.954 [Pipeline] { 00:00:03.965 [Pipeline] catchError 00:00:03.966 [Pipeline] { 00:00:03.978 [Pipeline] wrap 00:00:03.986 [Pipeline] { 00:00:03.993 [Pipeline] stage 00:00:03.995 [Pipeline] { (Prologue) 00:00:04.204 [Pipeline] sh 00:00:04.482 + logger -p user.info -t JENKINS-CI 00:00:04.501 [Pipeline] echo 00:00:04.502 Node: GP11 00:00:04.511 [Pipeline] sh 00:00:04.811 [Pipeline] setCustomBuildProperty 00:00:04.823 [Pipeline] echo 00:00:04.824 Cleanup processes 00:00:04.827 [Pipeline] sh 00:00:05.102 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.102 2170317 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.113 [Pipeline] sh 00:00:05.392 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.392 ++ grep -v 'sudo pgrep' 00:00:05.392 ++ awk '{print $1}' 00:00:05.392 + sudo kill -9 00:00:05.392 + true 00:00:05.406 [Pipeline] cleanWs 00:00:05.416 [WS-CLEANUP] Deleting project workspace... 00:00:05.416 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.421 [WS-CLEANUP] done 00:00:05.424 [Pipeline] setCustomBuildProperty 00:00:05.435 [Pipeline] sh 00:00:05.711 + sudo git config --global --replace-all safe.directory '*' 00:00:05.814 [Pipeline] httpRequest 00:00:05.843 [Pipeline] echo 00:00:05.893 Sorcerer 10.211.164.101 is alive 00:00:05.902 [Pipeline] httpRequest 00:00:05.910 HttpMethod: GET 00:00:05.910 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.910 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.915 Response Code: HTTP/1.1 200 OK 00:00:05.915 Success: Status code 200 is in the accepted range: 200,404 00:00:05.916 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:08.869 [Pipeline] sh 00:00:09.151 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:09.168 [Pipeline] httpRequest 00:00:09.181 [Pipeline] echo 00:00:09.183 Sorcerer 10.211.164.101 is alive 00:00:09.193 [Pipeline] httpRequest 00:00:09.198 HttpMethod: GET 00:00:09.198 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.199 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.219 Response Code: HTTP/1.1 200 OK 00:00:09.219 Success: Status code 200 is in the accepted range: 200,404 00:00:09.220 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:20.333 [Pipeline] sh 00:01:20.617 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:23.156 [Pipeline] sh 00:01:23.438 + git -C spdk log --oneline -n5 00:01:23.438 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:23.438 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:23.438 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:23.438 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:23.438 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:23.456 [Pipeline] withCredentials 00:01:23.467 > git --version # timeout=10 00:01:23.479 > git --version # 'git version 2.39.2' 00:01:23.497 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:23.499 [Pipeline] { 00:01:23.508 [Pipeline] retry 00:01:23.510 [Pipeline] { 00:01:23.541 [Pipeline] sh 00:01:23.826 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:24.411 [Pipeline] } 00:01:24.439 [Pipeline] // retry 00:01:24.445 [Pipeline] } 00:01:24.469 [Pipeline] // withCredentials 00:01:24.482 [Pipeline] httpRequest 00:01:24.502 [Pipeline] echo 00:01:24.504 Sorcerer 10.211.164.101 is alive 00:01:24.514 [Pipeline] httpRequest 00:01:24.520 HttpMethod: GET 00:01:24.520 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:24.521 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:24.527 Response Code: HTTP/1.1 200 OK 00:01:24.528 Success: Status code 200 is in the accepted range: 200,404 00:01:24.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.405 [Pipeline] sh 00:01:38.690 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.601 [Pipeline] sh 00:01:40.882 + git -C dpdk log --oneline -n5 00:01:40.882 eeb0605f11 version: 23.11.0 00:01:40.882 238778122a doc: 
update release notes for 23.11 00:01:40.883 46aa6b3cfc doc: fix description of RSS features 00:01:40.883 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:40.883 7e421ae345 devtools: support skipping forbid rule check 00:01:40.894 [Pipeline] } 00:01:40.912 [Pipeline] // stage 00:01:40.922 [Pipeline] stage 00:01:40.924 [Pipeline] { (Prepare) 00:01:40.949 [Pipeline] writeFile 00:01:40.969 [Pipeline] sh 00:01:41.250 + logger -p user.info -t JENKINS-CI 00:01:41.263 [Pipeline] sh 00:01:41.555 + logger -p user.info -t JENKINS-CI 00:01:41.567 [Pipeline] sh 00:01:41.846 + cat autorun-spdk.conf 00:01:41.846 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.846 SPDK_TEST_NVMF=1 00:01:41.846 SPDK_TEST_NVME_CLI=1 00:01:41.846 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.846 SPDK_TEST_NVMF_NICS=e810 00:01:41.846 SPDK_TEST_VFIOUSER=1 00:01:41.846 SPDK_RUN_UBSAN=1 00:01:41.846 NET_TYPE=phy 00:01:41.846 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.846 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.852 RUN_NIGHTLY=1 00:01:41.856 [Pipeline] readFile 00:01:41.881 [Pipeline] withEnv 00:01:41.882 [Pipeline] { 00:01:41.894 [Pipeline] sh 00:01:42.175 + set -ex 00:01:42.175 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:42.175 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:42.175 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.175 ++ SPDK_TEST_NVMF=1 00:01:42.175 ++ SPDK_TEST_NVME_CLI=1 00:01:42.175 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.175 ++ SPDK_TEST_NVMF_NICS=e810 00:01:42.175 ++ SPDK_TEST_VFIOUSER=1 00:01:42.175 ++ SPDK_RUN_UBSAN=1 00:01:42.175 ++ NET_TYPE=phy 00:01:42.175 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:42.175 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.175 ++ RUN_NIGHTLY=1 00:01:42.175 + case $SPDK_TEST_NVMF_NICS in 00:01:42.175 + DRIVERS=ice 00:01:42.175 + [[ tcp == \r\d\m\a ]] 00:01:42.175 + [[ -n ice ]] 00:01:42.175 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:42.175 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:42.175 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:42.175 rmmod: ERROR: Module irdma is not currently loaded 00:01:42.175 rmmod: ERROR: Module i40iw is not currently loaded 00:01:42.175 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:42.175 + true 00:01:42.175 + for D in $DRIVERS 00:01:42.175 + sudo modprobe ice 00:01:42.175 + exit 0 00:01:42.184 [Pipeline] } 00:01:42.203 [Pipeline] // withEnv 00:01:42.208 [Pipeline] } 00:01:42.226 [Pipeline] // stage 00:01:42.236 [Pipeline] catchError 00:01:42.237 [Pipeline] { 00:01:42.251 [Pipeline] timeout 00:01:42.252 Timeout set to expire in 50 min 00:01:42.253 [Pipeline] { 00:01:42.266 [Pipeline] stage 00:01:42.267 [Pipeline] { (Tests) 00:01:42.281 [Pipeline] sh 00:01:42.562 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:42.562 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:42.562 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:42.562 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:42.562 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.562 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:42.562 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:42.562 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:42.562 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:42.562 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:42.562 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:42.562 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:42.562 + source /etc/os-release 00:01:42.562 ++ NAME='Fedora Linux' 00:01:42.562 ++ VERSION='38 (Cloud Edition)' 00:01:42.562 ++ ID=fedora 00:01:42.562 ++ VERSION_ID=38 00:01:42.562 ++ VERSION_CODENAME= 00:01:42.562 ++ PLATFORM_ID=platform:f38 00:01:42.562 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:42.562 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:42.562 ++ LOGO=fedora-logo-icon 00:01:42.562 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:42.562 ++ HOME_URL=https://fedoraproject.org/ 00:01:42.562 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:42.562 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:42.562 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:42.562 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:42.562 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:42.562 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:42.562 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:42.563 ++ SUPPORT_END=2024-05-14 00:01:42.563 ++ VARIANT='Cloud Edition' 00:01:42.563 ++ VARIANT_ID=cloud 00:01:42.563 + uname -a 00:01:42.563 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:42.563 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:43.494 Hugepages 00:01:43.494 node hugesize free / total 00:01:43.494 node0 1048576kB 0 / 0 00:01:43.494 node0 2048kB 0 / 0 00:01:43.494 node1 1048576kB 0 / 0 00:01:43.494 node1 2048kB 0 / 0 00:01:43.494 00:01:43.494 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.494 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:43.494 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:43.495 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:43.495 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:43.495 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:43.752 + rm -f /tmp/spdk-ld-path 00:01:43.752 + source autorun-spdk.conf 00:01:43.752 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.752 ++ SPDK_TEST_NVMF=1 00:01:43.752 ++ SPDK_TEST_NVME_CLI=1 00:01:43.752 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.752 ++ SPDK_TEST_NVMF_NICS=e810 00:01:43.752 ++ SPDK_TEST_VFIOUSER=1 00:01:43.752 ++ SPDK_RUN_UBSAN=1 00:01:43.752 ++ NET_TYPE=phy 00:01:43.752 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:43.752 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.752 ++ RUN_NIGHTLY=1 00:01:43.752 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.752 + [[ -n '' ]] 00:01:43.752 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.752 + for M in /var/spdk/build-*-manifest.txt 00:01:43.752 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.752 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.752 + for M in /var/spdk/build-*-manifest.txt 00:01:43.752 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.752 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.752 ++ uname 00:01:43.752 + [[ Linux == \L\i\n\u\x ]] 00:01:43.752 + sudo dmesg -T 00:01:43.752 + sudo dmesg --clear 00:01:43.752 + dmesg_pid=2171025 00:01:43.752 + [[ Fedora Linux == FreeBSD ]] 00:01:43.752 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.752 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.752 + sudo dmesg -Tw 00:01:43.752 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.752 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.752 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.752 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.752 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.752 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.752 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.752 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.752 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.752 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.752 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.752 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.752 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.752 Test configuration: 00:01:43.752 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.752 SPDK_TEST_NVMF=1 00:01:43.753 SPDK_TEST_NVME_CLI=1 00:01:43.753 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.753 SPDK_TEST_NVMF_NICS=e810 00:01:43.753 SPDK_TEST_VFIOUSER=1 00:01:43.753 SPDK_RUN_UBSAN=1 00:01:43.753 NET_TYPE=phy 00:01:43.753 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:43.753 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.753 RUN_NIGHTLY=1 03:11:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.753 03:11:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.753 03:11:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.753 03:11:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.753 03:11:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.753 03:11:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.753 03:11:28 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.753 03:11:28 -- paths/export.sh@5 -- $ export PATH 00:01:43.753 03:11:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.753 03:11:28 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.753 03:11:28 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:43.753 03:11:28 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721524288.XXXXXX 00:01:43.753 03:11:28 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721524288.WHqFvB 00:01:43.753 03:11:28 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:43.753 03:11:28 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:43.753 03:11:28 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.753 03:11:28 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:43.753 03:11:28 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.753 03:11:28 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.753 03:11:28 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:43.753 03:11:28 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:43.753 03:11:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.753 03:11:28 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:43.753 03:11:28 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:43.753 03:11:28 -- pm/common@17 -- $ local monitor 00:01:43.753 03:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.753 03:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.753 03:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.753 03:11:28 -- pm/common@21 -- $ date +%s 00:01:43.753 03:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.753 03:11:28 -- pm/common@21 -- $ date +%s 00:01:43.753 03:11:28 -- pm/common@25 -- $ sleep 1 00:01:43.753 03:11:28 -- pm/common@21 -- $ date +%s 00:01:43.753 03:11:28 -- pm/common@21 -- $ date +%s 00:01:43.753 03:11:28 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721524288 00:01:43.753 03:11:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721524288 00:01:43.753 03:11:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721524288 00:01:43.753 03:11:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721524288 00:01:43.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721524288_collect-vmstat.pm.log 00:01:43.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721524288_collect-cpu-load.pm.log 00:01:43.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721524288_collect-cpu-temp.pm.log 00:01:43.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721524288_collect-bmc-pm.bmc.pm.log 00:01:44.687 03:11:29 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:44.687 03:11:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.687 03:11:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.687 03:11:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.687 03:11:29 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.687 Sun Jul 21 01:11:29 AM UTC 2024 00:01:44.687 03:11:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.687 v24.05-13-g5fa2f5086 00:01:44.687 03:11:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.687 03:11:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.687 03:11:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.687 03:11:29 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:44.687 03:11:29 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:44.687 03:11:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.946 ************************************ 00:01:44.946 START TEST ubsan 00:01:44.946 ************************************ 00:01:44.946 03:11:30 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:44.946 using ubsan 00:01:44.946 00:01:44.946 real 0m0.000s 00:01:44.946 user 0m0.000s 00:01:44.946 sys 0m0.000s 00:01:44.946 03:11:30 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:44.946 03:11:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.946 ************************************ 00:01:44.946 END TEST ubsan 00:01:44.946 ************************************ 00:01:44.946 03:11:30 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:44.946 03:11:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:44.946 03:11:30 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:44.946 03:11:30 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:44.946 03:11:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:44.946 03:11:30 -- common/autotest_common.sh@10 -- $ set +x 
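A minimal sketch of the NIC-preparation step traced earlier in this log, reconstructed from its xtrace output; the conf path and variable names are verbatim from the log, while the rdma branch body is an assumption (this tcp run never enters it):

    set -ex
    # Load the job configuration written earlier in this log.
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source "$conf"

    # Map the NIC selection to a kernel driver (e810 -> ice, as traced above).
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;
    esac

    if [[ $SPDK_TEST_NVMF_TRANSPORT == rdma ]]; then
        :  # assumption: an rdma run would add RDMA driver modules here
    fi

    if [[ -n $DRIVERS ]]; then
        # Unload modules that could claim the NIC first; missing modules are
        # harmless, hence '|| true' (the log shows the same rmmod/true pattern).
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi
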
00:01:44.946 ************************************ 00:01:44.946 START TEST build_native_dpdk 00:01:44.946 ************************************ 00:01:44.946 03:11:30 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:44.946 eeb0605f11 version: 23.11.0 00:01:44.946 238778122a doc: update release notes for 23.11 00:01:44.946 46aa6b3cfc doc: fix description of RSS features 00:01:44.946 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.946 7e421ae345 devtools: support skipping forbid rule check 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:44.946 03:11:30 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:44.946 03:11:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:44.946 patching file config/rte_config.h 00:01:44.946 Hunk #1 succeeded at 60 (offset 1 line). 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:44.946 03:11:30 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.162 The Meson build system 00:01:49.162 Version: 1.3.1 00:01:49.162 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:49.162 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:49.162 Build type: native build 00:01:49.162 Program cat found: YES (/usr/bin/cat) 00:01:49.162 Project name: DPDK 00:01:49.162 Project version: 23.11.0 00:01:49.162 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.162 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:49.162 Host machine cpu family: x86_64 00:01:49.162 Host machine cpu: x86_64 00:01:49.162 Message: ## Building in Developer Mode ## 00:01:49.162 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.162 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:49.162 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.162 Program python3 found: YES (/usr/bin/python3) 00:01:49.162 Program cat found: YES (/usr/bin/cat) 00:01:49.162 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
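The meson invocation above can be reproduced outside CI. A minimal sketch using the same options; following the two deprecation warnings meson itself prints in this log, it spells the command as `meson setup` and uses -Dcpu_instruction_set in place of -Dmachine:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j48        # -j48 matches the ninja invocation below
    ninja -C build-tmp install     # assumption: stages libs into the --prefix dir
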
00:01:49.162 Compiler for C supports arguments -march=native: YES 00:01:49.162 Checking for size of "void *" : 8 00:01:49.162 Checking for size of "void *" : 8 (cached) 00:01:49.162 Library m found: YES 00:01:49.162 Library numa found: YES 00:01:49.162 Has header "numaif.h" : YES 00:01:49.162 Library fdt found: NO 00:01:49.162 Library execinfo found: NO 00:01:49.162 Has header "execinfo.h" : YES 00:01:49.162 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.162 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.162 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.162 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.162 Run-time dependency openssl found: YES 3.0.9 00:01:49.163 Run-time dependency libpcap found: YES 1.10.4 00:01:49.163 Has header "pcap.h" with dependency libpcap: YES 00:01:49.163 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.163 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.163 Compiler for C supports arguments -Wformat: YES 00:01:49.163 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.163 Compiler for C supports arguments -Wformat-security: NO 00:01:49.163 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.163 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.163 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.163 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.163 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.163 Compiler for C supports arguments -Wsign-compare: YES 00:01:49.163 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.163 Compiler for C supports arguments -Wundef: YES 00:01:49.163 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.163 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.163 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:49.163 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.163 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.163 Program objdump found: YES (/usr/bin/objdump) 00:01:49.163 Compiler for C supports arguments -mavx512f: YES 00:01:49.163 Checking if "AVX512 checking" compiles: YES 00:01:49.163 Fetching value of define "__SSE4_2__" : 1 00:01:49.163 Fetching value of define "__AES__" : 1 00:01:49.163 Fetching value of define "__AVX__" : 1 00:01:49.163 Fetching value of define "__AVX2__" : (undefined) 00:01:49.163 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.163 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.163 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.163 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.163 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.163 Fetching value of define "__PCLMUL__" : 1 00:01:49.163 Fetching value of define "__RDRND__" : 1 00:01:49.163 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.163 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.163 Fetching value of define "__znver1__" : (undefined) 00:01:49.163 Fetching value of define "__znver2__" : (undefined) 00:01:49.163 Fetching value of define "__znver3__" : (undefined) 00:01:49.163 Fetching value of define "__znver4__" : (undefined) 00:01:49.163 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.163 Message: lib/log: Defining dependency "log" 00:01:49.163 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:49.163 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.163 Checking for function "getentropy" : NO 00:01:49.163 Message: lib/eal: Defining dependency "eal" 00:01:49.163 Message: lib/ring: Defining dependency "ring" 00:01:49.163 Message: lib/rcu: Defining dependency "rcu" 00:01:49.163 Message: lib/mempool: Defining dependency "mempool" 00:01:49.163 Message: lib/mbuf: Defining dependency "mbuf" 00:01:49.163 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.163 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.163 Compiler for C supports arguments -mpclmul: YES 00:01:49.163 Compiler for C supports arguments -maes: YES 00:01:49.163 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.163 Compiler for C supports arguments -mavx512bw: YES 00:01:49.163 Compiler for C supports arguments -mavx512dq: YES 00:01:49.163 Compiler for C supports arguments -mavx512vl: YES 00:01:49.163 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.163 Compiler for C supports arguments -mavx2: YES 00:01:49.163 Compiler for C supports arguments -mavx: YES 00:01:49.163 Message: lib/net: Defining dependency "net" 00:01:49.163 Message: lib/meter: Defining dependency "meter" 00:01:49.163 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.163 Message: lib/pci: Defining dependency "pci" 00:01:49.163 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.163 Message: lib/metrics: Defining dependency "metrics" 00:01:49.163 Message: lib/hash: Defining dependency "hash" 00:01:49.163 Message: lib/timer: Defining dependency "timer" 00:01:49.163 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:49.163 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:49.163 Message: lib/acl: Defining dependency "acl" 00:01:49.163 Message: lib/bbdev: Defining dependency "bbdev" 00:01:49.163 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:49.163 Run-time dependency libelf found: YES 0.190 00:01:49.163 Message: lib/bpf: Defining dependency "bpf" 00:01:49.163 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:49.163 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.163 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.163 Message: lib/distributor: Defining dependency "distributor" 00:01:49.163 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.163 Message: lib/efd: Defining dependency "efd" 00:01:49.163 Message: lib/eventdev: Defining dependency "eventdev" 00:01:49.163 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:49.163 Message: lib/gpudev: Defining dependency "gpudev" 00:01:49.163 Message: lib/gro: Defining dependency "gro" 00:01:49.163 Message: lib/gso: Defining dependency "gso" 00:01:49.163 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:49.163 Message: lib/jobstats: Defining dependency "jobstats" 00:01:49.163 Message: lib/latencystats: Defining dependency "latencystats" 00:01:49.163 Message: lib/lpm: Defining dependency "lpm" 00:01:49.163 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:49.163 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:49.163 Message: lib/member: Defining dependency "member" 00:01:49.163 Message: lib/pcapng: Defining dependency "pcapng" 00:01:49.163 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.163 Message: lib/power: Defining dependency "power" 00:01:49.163 Message: lib/rawdev: Defining dependency "rawdev" 00:01:49.163 Message: lib/regexdev: Defining dependency "regexdev" 00:01:49.163 Message: lib/mldev: Defining dependency "mldev" 00:01:49.163 Message: lib/rib: Defining dependency "rib" 00:01:49.163 Message: lib/reorder: Defining dependency "reorder" 00:01:49.163 Message: lib/sched: Defining dependency "sched" 00:01:49.163 Message: lib/security: Defining dependency "security" 00:01:49.163 Message: lib/stack: Defining dependency "stack" 00:01:49.163 Has header "linux/userfaultfd.h" : YES 00:01:49.163 Has header "linux/vduse.h" : YES 00:01:49.163 Message: lib/vhost: Defining dependency "vhost" 00:01:49.163 Message: lib/ipsec: Defining dependency "ipsec" 00:01:49.163 Message: lib/pdcp: Defining dependency "pdcp" 00:01:49.163 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.163 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.163 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:49.163 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.163 Message: lib/fib: Defining dependency "fib" 00:01:49.163 Message: lib/port: Defining dependency "port" 00:01:49.163 Message: lib/pdump: Defining dependency "pdump" 00:01:49.163 Message: lib/table: Defining dependency "table" 00:01:49.163 Message: lib/pipeline: Defining dependency "pipeline" 00:01:49.163 Message: lib/graph: Defining dependency "graph" 00:01:49.163 Message: lib/node: Defining dependency "node" 00:01:50.540 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.540 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.540 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.540 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:50.540 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.540 Compiler for C supports arguments -Wno-format: YES 00:01:50.540 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.540 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:50.540 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:50.540 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:50.540 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:50.540 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.540 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.540 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:50.540 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:50.541 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:50.541 Has header "sys/epoll.h" : YES 00:01:50.541 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.541 Configuring doxy-api-html.conf using configuration 00:01:50.541 Configuring doxy-api-man.conf using configuration 00:01:50.541 Program mandb found: YES (/usr/bin/mandb) 00:01:50.541 Program sphinx-build found: NO 00:01:50.541 Configuring rte_build_config.h using configuration 00:01:50.541 Message: 00:01:50.541 ================= 00:01:50.541 Applications Enabled 00:01:50.541 
================= 00:01:50.541 00:01:50.541 apps: 00:01:50.541 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:50.541 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:50.541 test-pmd, test-regex, test-sad, test-security-perf, 00:01:50.541 00:01:50.541 Message: 00:01:50.541 ================= 00:01:50.541 Libraries Enabled 00:01:50.541 ================= 00:01:50.541 00:01:50.541 libs: 00:01:50.541 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.541 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:50.541 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:50.541 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:50.541 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:50.541 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:50.541 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:50.541 00:01:50.541 00:01:50.541 Message: 00:01:50.541 =============== 00:01:50.541 Drivers Enabled 00:01:50.541 =============== 00:01:50.541 00:01:50.541 common: 00:01:50.541 00:01:50.541 bus: 00:01:50.541 pci, vdev, 00:01:50.541 mempool: 00:01:50.541 ring, 00:01:50.541 dma: 00:01:50.541 00:01:50.541 net: 00:01:50.541 i40e, 00:01:50.541 raw: 00:01:50.541 00:01:50.541 crypto: 00:01:50.541 00:01:50.541 compress: 00:01:50.541 00:01:50.541 regex: 00:01:50.541 00:01:50.541 ml: 00:01:50.541 00:01:50.541 vdpa: 00:01:50.541 00:01:50.541 event: 00:01:50.541 00:01:50.541 baseband: 00:01:50.541 00:01:50.541 gpu: 00:01:50.541 00:01:50.541 00:01:50.541 Message: 00:01:50.541 ================= 00:01:50.541 Content Skipped 00:01:50.541 ================= 00:01:50.541 00:01:50.541 apps: 00:01:50.541 00:01:50.541 libs: 00:01:50.541 00:01:50.541 drivers: 00:01:50.541 common/cpt: not in enabled drivers build config 00:01:50.541 common/dpaax: not in enabled drivers build config 00:01:50.541 common/iavf: not in enabled drivers build config 00:01:50.541 common/idpf: not in enabled drivers build config 00:01:50.541 common/mvep: not in enabled drivers build config 00:01:50.541 common/octeontx: not in enabled drivers build config 00:01:50.541 bus/auxiliary: not in enabled drivers build config 00:01:50.541 bus/cdx: not in enabled drivers build config 00:01:50.541 bus/dpaa: not in enabled drivers build config 00:01:50.541 bus/fslmc: not in enabled drivers build config 00:01:50.541 bus/ifpga: not in enabled drivers build config 00:01:50.541 bus/platform: not in enabled drivers build config 00:01:50.541 bus/vmbus: not in enabled drivers build config 00:01:50.541 common/cnxk: not in enabled drivers build config 00:01:50.541 common/mlx5: not in enabled drivers build config 00:01:50.541 common/nfp: not in enabled drivers build config 00:01:50.541 common/qat: not in enabled drivers build config 00:01:50.541 common/sfc_efx: not in enabled drivers build config 00:01:50.541 mempool/bucket: not in enabled drivers build config 00:01:50.541 mempool/cnxk: not in enabled drivers build config 00:01:50.541 mempool/dpaa: not in enabled drivers build config 00:01:50.541 mempool/dpaa2: not in enabled drivers build config 00:01:50.541 mempool/octeontx: not in enabled drivers build config 00:01:50.541 mempool/stack: not in enabled drivers build config 00:01:50.541 dma/cnxk: not in enabled drivers build config 00:01:50.541 dma/dpaa: not in enabled drivers build config 00:01:50.541 dma/dpaa2: not in enabled drivers build 
config 00:01:50.541 dma/hisilicon: not in enabled drivers build config 00:01:50.541 dma/idxd: not in enabled drivers build config 00:01:50.541 dma/ioat: not in enabled drivers build config 00:01:50.541 dma/skeleton: not in enabled drivers build config 00:01:50.541 net/af_packet: not in enabled drivers build config 00:01:50.541 net/af_xdp: not in enabled drivers build config 00:01:50.541 net/ark: not in enabled drivers build config 00:01:50.541 net/atlantic: not in enabled drivers build config 00:01:50.541 net/avp: not in enabled drivers build config 00:01:50.541 net/axgbe: not in enabled drivers build config 00:01:50.541 net/bnx2x: not in enabled drivers build config 00:01:50.541 net/bnxt: not in enabled drivers build config 00:01:50.541 net/bonding: not in enabled drivers build config 00:01:50.541 net/cnxk: not in enabled drivers build config 00:01:50.541 net/cpfl: not in enabled drivers build config 00:01:50.541 net/cxgbe: not in enabled drivers build config 00:01:50.541 net/dpaa: not in enabled drivers build config 00:01:50.541 net/dpaa2: not in enabled drivers build config 00:01:50.541 net/e1000: not in enabled drivers build config 00:01:50.541 net/ena: not in enabled drivers build config 00:01:50.541 net/enetc: not in enabled drivers build config 00:01:50.541 net/enetfec: not in enabled drivers build config 00:01:50.541 net/enic: not in enabled drivers build config 00:01:50.541 net/failsafe: not in enabled drivers build config 00:01:50.541 net/fm10k: not in enabled drivers build config 00:01:50.541 net/gve: not in enabled drivers build config 00:01:50.541 net/hinic: not in enabled drivers build config 00:01:50.541 net/hns3: not in enabled drivers build config 00:01:50.541 net/iavf: not in enabled drivers build config 00:01:50.541 net/ice: not in enabled drivers build config 00:01:50.541 net/idpf: not in enabled drivers build config 00:01:50.541 net/igc: not in enabled drivers build config 00:01:50.541 net/ionic: not in enabled drivers build config 00:01:50.541 net/ipn3ke: not in enabled drivers build config 00:01:50.541 net/ixgbe: not in enabled drivers build config 00:01:50.541 net/mana: not in enabled drivers build config 00:01:50.541 net/memif: not in enabled drivers build config 00:01:50.541 net/mlx4: not in enabled drivers build config 00:01:50.541 net/mlx5: not in enabled drivers build config 00:01:50.541 net/mvneta: not in enabled drivers build config 00:01:50.541 net/mvpp2: not in enabled drivers build config 00:01:50.541 net/netvsc: not in enabled drivers build config 00:01:50.541 net/nfb: not in enabled drivers build config 00:01:50.541 net/nfp: not in enabled drivers build config 00:01:50.541 net/ngbe: not in enabled drivers build config 00:01:50.541 net/null: not in enabled drivers build config 00:01:50.541 net/octeontx: not in enabled drivers build config 00:01:50.541 net/octeon_ep: not in enabled drivers build config 00:01:50.541 net/pcap: not in enabled drivers build config 00:01:50.541 net/pfe: not in enabled drivers build config 00:01:50.541 net/qede: not in enabled drivers build config 00:01:50.541 net/ring: not in enabled drivers build config 00:01:50.541 net/sfc: not in enabled drivers build config 00:01:50.541 net/softnic: not in enabled drivers build config 00:01:50.541 net/tap: not in enabled drivers build config 00:01:50.541 net/thunderx: not in enabled drivers build config 00:01:50.541 net/txgbe: not in enabled drivers build config 00:01:50.541 net/vdev_netvsc: not in enabled drivers build config 00:01:50.541 net/vhost: not in enabled drivers build config 
00:01:50.541 net/virtio: not in enabled drivers build config 00:01:50.541 net/vmxnet3: not in enabled drivers build config 00:01:50.541 raw/cnxk_bphy: not in enabled drivers build config 00:01:50.541 raw/cnxk_gpio: not in enabled drivers build config 00:01:50.541 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:50.541 raw/ifpga: not in enabled drivers build config 00:01:50.541 raw/ntb: not in enabled drivers build config 00:01:50.541 raw/skeleton: not in enabled drivers build config 00:01:50.541 crypto/armv8: not in enabled drivers build config 00:01:50.541 crypto/bcmfs: not in enabled drivers build config 00:01:50.541 crypto/caam_jr: not in enabled drivers build config 00:01:50.541 crypto/ccp: not in enabled drivers build config 00:01:50.541 crypto/cnxk: not in enabled drivers build config 00:01:50.541 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.541 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.541 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.541 crypto/mlx5: not in enabled drivers build config 00:01:50.541 crypto/mvsam: not in enabled drivers build config 00:01:50.541 crypto/nitrox: not in enabled drivers build config 00:01:50.541 crypto/null: not in enabled drivers build config 00:01:50.541 crypto/octeontx: not in enabled drivers build config 00:01:50.541 crypto/openssl: not in enabled drivers build config 00:01:50.541 crypto/scheduler: not in enabled drivers build config 00:01:50.541 crypto/uadk: not in enabled drivers build config 00:01:50.541 crypto/virtio: not in enabled drivers build config 00:01:50.541 compress/isal: not in enabled drivers build config 00:01:50.541 compress/mlx5: not in enabled drivers build config 00:01:50.541 compress/octeontx: not in enabled drivers build config 00:01:50.541 compress/zlib: not in enabled drivers build config 00:01:50.541 regex/mlx5: not in enabled drivers build config 00:01:50.541 regex/cn9k: not in enabled drivers build config 00:01:50.541 ml/cnxk: not in enabled drivers build config 00:01:50.541 vdpa/ifc: not in enabled drivers build config 00:01:50.541 vdpa/mlx5: not in enabled drivers build config 00:01:50.541 vdpa/nfp: not in enabled drivers build config 00:01:50.541 vdpa/sfc: not in enabled drivers build config 00:01:50.542 event/cnxk: not in enabled drivers build config 00:01:50.542 event/dlb2: not in enabled drivers build config 00:01:50.542 event/dpaa: not in enabled drivers build config 00:01:50.542 event/dpaa2: not in enabled drivers build config 00:01:50.542 event/dsw: not in enabled drivers build config 00:01:50.542 event/opdl: not in enabled drivers build config 00:01:50.542 event/skeleton: not in enabled drivers build config 00:01:50.542 event/sw: not in enabled drivers build config 00:01:50.542 event/octeontx: not in enabled drivers build config 00:01:50.542 baseband/acc: not in enabled drivers build config 00:01:50.542 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:50.542 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:50.542 baseband/la12xx: not in enabled drivers build config 00:01:50.542 baseband/null: not in enabled drivers build config 00:01:50.542 baseband/turbo_sw: not in enabled drivers build config 00:01:50.542 gpu/cuda: not in enabled drivers build config 00:01:50.542 00:01:50.542 00:01:50.542 Build targets in project: 220 00:01:50.542 00:01:50.542 DPDK 23.11.0 00:01:50.542 00:01:50.542 User defined options 00:01:50.542 libdir : lib 00:01:50.542 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.542 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:50.542 c_link_args : 00:01:50.542 enable_docs : false 00:01:50.542 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.542 enable_kmods : false 00:01:50.542 machine : native 00:01:50.542 tests : false 00:01:50.542 00:01:50.542 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.542 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:50.542 03:11:35 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:50.542 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:50.804 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.804 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.804 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.804 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.804 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.804 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.804 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.804 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.804 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.804 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.804 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.804 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.804 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.804 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.804 [15/710] Linking static target lib/librte_kvargs.a 00:01:50.804 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.804 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.065 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.065 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.065 [20/710] Linking static target lib/librte_log.a 00:01:51.065 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.326 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.593 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.851 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.851 [25/710] Linking target lib/librte_log.so.24.0 00:01:51.851 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.851 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.851 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.851 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.851 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.851 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.851 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.851 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.851 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.851 [35/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.851 [36/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.851 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.851 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.851 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.851 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.851 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.851 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.851 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.851 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.851 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.851 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.851 [47/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:51.851 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.851 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.110 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.110 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.110 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.110 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.110 [54/710] Linking target lib/librte_kvargs.so.24.0 00:01:52.110 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.110 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.110 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.110 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.110 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.110 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.110 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.110 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.110 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.110 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.370 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:52.370 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.370 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.370 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.629 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.629 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.629 [71/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.629 
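The configuration summary above records every user-defined option meson resolved (prefix, libdir, c_args, the enable_drivers list, and so on), and meson also warns that the bare "meson [options]" form is deprecated. A sketch of the equivalent explicit "meson setup" invocation, reconstructed from the logged summary (option values are copied verbatim from the summary above; the build-directory name is an assumption taken from the later "ninja -C .../build-tmp" call):

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j48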
[72/710] Linking static target lib/librte_pci.a 00:01:52.629 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.629 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.629 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.893 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.893 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.893 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.893 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.893 [80/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.893 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.893 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.893 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.893 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.893 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.893 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.893 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.893 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.155 [89/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.155 [90/710] Linking static target lib/librte_ring.a 00:01:53.155 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.155 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.155 [93/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.155 [94/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.155 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.155 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.155 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.155 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.155 [99/710] Linking static target lib/librte_meter.a 00:01:53.155 [100/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.155 [101/710] Linking static target lib/librte_telemetry.a 00:01:53.155 [102/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.155 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.155 [104/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.432 [105/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.432 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.432 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.433 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.433 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.433 [110/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.433 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.433 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.433 [113/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.433 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.433 [115/710] Linking static target lib/librte_eal.a 00:01:53.433 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.695 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.695 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.695 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.695 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.695 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.695 [122/710] Linking static target lib/librte_net.a 00:01:53.695 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.695 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.695 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.958 [126/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.958 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.958 [128/710] Linking static target lib/librte_cmdline.a 00:01:53.958 [129/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.958 [130/710] Linking static target lib/librte_mempool.a 00:01:53.958 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:53.958 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:54.215 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:54.215 [134/710] Linking static target lib/librte_cfgfile.a 00:01:54.215 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.215 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:54.215 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.215 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:54.215 [139/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:54.215 [140/710] Linking static target lib/librte_metrics.a 00:01:54.215 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.215 [142/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:54.215 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:54.479 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:54.479 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:54.479 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:54.479 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:54.740 [148/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.740 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:54.740 [150/710] Linking static target lib/librte_rcu.a 00:01:54.740 [151/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:54.740 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.740 [153/710] Linking static target lib/librte_bitratestats.a 00:01:54.740 [154/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.740 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:54.740 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.740 [157/710] Linking static target lib/librte_timer.a 00:01:54.740 [158/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.740 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:55.001 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.001 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:55.001 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.001 [163/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.001 [164/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.001 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:55.001 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:55.001 [167/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.262 [168/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.262 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:55.262 [170/710] Linking static target lib/librte_bbdev.a 00:01:55.262 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.262 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:55.262 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.262 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.524 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.524 [176/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.524 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.524 [178/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:55.524 [179/710] Linking static target lib/librte_compressdev.a 00:01:55.524 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.524 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:55.784 [182/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:55.784 [183/710] Linking static target lib/librte_distributor.a 00:01:55.784 [184/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.784 [185/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:56.045 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:56.045 [187/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.045 [188/710] Linking static target lib/librte_dmadev.a 00:01:56.045 [189/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.308 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:56.308 [191/710] Linking static target lib/librte_bpf.a 00:01:56.308 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:56.308 [193/710] Generating 
lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.308 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:56.308 [195/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.308 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:56.308 [197/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:56.308 [198/710] Linking static target lib/librte_dispatcher.a 00:01:56.308 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:56.308 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:56.308 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:56.575 [202/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.575 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:56.576 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:56.576 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:56.576 [206/710] Linking static target lib/librte_gpudev.a 00:01:56.576 [207/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.576 [208/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:56.576 [209/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:56.576 [210/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:56.576 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.576 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:56.576 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.576 [214/710] Linking static target lib/librte_gro.a 00:01:56.576 [215/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.838 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:56.838 [217/710] Linking static target lib/librte_jobstats.a 00:01:56.838 [218/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:56.838 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:56.838 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:57.100 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.101 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.101 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:57.101 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:57.101 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.359 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:57.359 [227/710] Linking static target lib/librte_latencystats.a 00:01:57.359 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:57.359 [229/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:57.359 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:57.359 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:57.359 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:57.623 
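Targets such as lib/member/libsketch_avx512_tmp.a, just linked above, show how ISA-specific code paths are handled: the AVX-512 variant of a routine is compiled into a small temporary static library with stronger -m flags than the baseline build, then folded into the parent library so the faster path can be selected at run time on capable CPUs. Whether such a variant is built at all depends on what the toolchain can target; a rough stand-alone equivalent of that probe (illustrative only; the real check happens inside meson, and the exact flags here are an assumption):

    echo 'int main(void){return 0;}' | cc -mavx512f -mavx512bw -x c - -o /dev/null \
        && echo 'compiler can emit AVX-512'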
[233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:57.623 [234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:57.623 [235/710] Linking static target lib/librte_ip_frag.a 00:01:57.623 [236/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:57.623 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.623 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:57.623 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.623 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.888 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:57.888 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:57.888 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.888 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:57.888 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.888 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.146 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:58.146 [248/710] Linking static target lib/librte_gso.a 00:01:58.146 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:58.146 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.146 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:58.146 [252/710] Linking static target lib/librte_regexdev.a 00:01:58.409 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:58.409 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:58.409 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:58.409 [256/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:58.409 [257/710] Linking static target lib/librte_rawdev.a 00:01:58.409 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:58.409 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.409 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:58.409 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:58.409 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:58.409 [263/710] Linking static target lib/librte_mldev.a 00:01:58.669 [264/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:58.669 [265/710] Linking static target lib/librte_pcapng.a 00:01:58.669 [266/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:58.669 [267/710] Linking static target lib/librte_efd.a 00:01:58.669 [268/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:58.669 [269/710] Linking static target lib/acl/libavx2_tmp.a 00:01:58.669 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:58.669 [271/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:58.669 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:58.669 [273/710] Linking static target lib/librte_stack.a 00:01:58.669 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 
00:01:58.669 [275/710] Linking static target lib/librte_lpm.a 00:01:58.941 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:58.941 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.941 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.941 [279/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.941 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.941 [281/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.941 [282/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.941 [283/710] Linking static target lib/librte_hash.a 00:01:58.941 [284/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.941 [285/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.202 [286/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:59.202 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.202 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.202 [289/710] Linking static target lib/librte_reorder.a 00:01:59.202 [290/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.202 [291/710] Linking static target lib/librte_power.a 00:01:59.202 [292/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.460 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.460 [294/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:59.460 [295/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.460 [296/710] Linking static target lib/acl/libavx512_tmp.a 00:01:59.460 [297/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.460 [298/710] Linking static target lib/librte_acl.a 00:01:59.460 [299/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.460 [300/710] Linking static target lib/librte_security.a 00:01:59.460 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.718 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:59.718 [303/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.718 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.718 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.718 [306/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:59.718 [307/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:59.980 [308/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:59.980 [309/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.980 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:59.980 [311/710] Linking static target lib/librte_rib.a 00:01:59.980 [312/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.980 [313/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.980 [314/710] Linking static target lib/librte_mbuf.a 00:01:59.980 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:59.980 
[316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:59.980 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:59.980 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.240 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:00.240 [320/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.240 [321/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:00.240 [322/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:00.240 [323/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:00.240 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:00.240 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:00.240 [326/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:00.240 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.505 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.505 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.766 [330/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:00.766 [331/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.766 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:00.766 [333/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:01.026 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:01.026 [335/710] Linking static target lib/librte_member.a 00:02:01.026 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:01.026 [337/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.026 [338/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:01.026 [339/710] Linking static target lib/librte_eventdev.a 00:02:01.337 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:01.337 [341/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.337 [342/710] Linking static target lib/librte_cryptodev.a 00:02:01.337 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:01.337 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:01.337 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:01.337 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:01.337 [347/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:01.337 [348/710] Linking static target lib/librte_sched.a 00:02:01.337 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:01.337 [350/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:01.337 [351/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:01.600 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:01.600 [353/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:01.600 [354/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:01.600 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:01.600 [356/710] Linking static target lib/librte_fib.a 
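The recurring "Generating lib/<name>.sym_chk" and "Generating symbol file ...so.24.0.symbols" steps above are meson-wrapped checks on each shared object's ABI: the symbols the built .so actually exports are compared against what the library declares for the 24.0 ABI version. A hedged way to eyeball the same information by hand once a library has linked (path assumed from meson's usual build layout):

    nm -D --defined-only build-tmp/lib/librte_hash.so.24.0 | head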
00:02:01.600 [357/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.600 [358/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.600 [359/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:01.600 [360/710] Linking static target lib/librte_ethdev.a 00:02:01.601 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:01.869 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:01.869 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:01.869 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:01.869 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:01.869 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:01.869 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:02.130 [368/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.130 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:02.130 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.130 [371/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:02.130 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:02.130 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:02.391 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:02.391 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:02.391 [376/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:02.391 [377/710] Linking static target lib/librte_pdump.a 00:02:02.654 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:02.654 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.654 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:02.654 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:02.654 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:02.654 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:02.654 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:02.654 [385/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:02.918 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.919 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:02.919 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:02.919 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:02.919 [390/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:02.919 [391/710] Linking static target lib/librte_ipsec.a 00:02:02.919 [392/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.919 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:03.181 [394/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:03.181 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.181 [396/710] Linking static target lib/librte_table.a 00:02:03.181 [397/710] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:03.181 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:03.443 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:03.443 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:03.443 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.708 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:03.708 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.970 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:03.970 [405/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:03.970 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:03.970 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.970 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.970 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.970 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:04.231 [411/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:04.231 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.231 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.231 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:04.231 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.494 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:04.494 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.494 [418/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.494 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.494 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.494 [421/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.494 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.494 [423/710] Linking static target drivers/librte_bus_vdev.a 00:02:04.494 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:04.756 [425/710] Linking static target lib/librte_port.a 00:02:04.756 [426/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.756 [427/710] Linking target lib/librte_eal.so.24.0 00:02:04.756 [428/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.756 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:05.036 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.036 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.036 [432/710] Linking static target drivers/librte_bus_pci.a 00:02:05.036 [433/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:05.036 [434/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:05.036 [435/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:05.036 [436/710] Linking target 
lib/librte_ring.so.24.0 00:02:05.036 [437/710] Linking target lib/librte_meter.so.24.0 00:02:05.036 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:05.036 [439/710] Linking target lib/librte_pci.so.24.0 00:02:05.036 [440/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.036 [441/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:05.036 [442/710] Linking target lib/librte_timer.so.24.0 00:02:05.362 [443/710] Linking target lib/librte_cfgfile.so.24.0 00:02:05.362 [444/710] Linking target lib/librte_acl.so.24.0 00:02:05.362 [445/710] Linking target lib/librte_dmadev.so.24.0 00:02:05.362 [446/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:05.362 [447/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:05.362 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:05.362 [449/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:05.362 [450/710] Linking target lib/librte_jobstats.so.24.0 00:02:05.362 [451/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.362 [452/710] Linking static target lib/librte_graph.a 00:02:05.362 [453/710] Linking target lib/librte_rcu.so.24.0 00:02:05.362 [454/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:05.362 [455/710] Linking target lib/librte_rawdev.so.24.0 00:02:05.362 [456/710] Linking target lib/librte_mempool.so.24.0 00:02:05.362 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.362 [458/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:05.362 [459/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.362 [460/710] Linking target lib/librte_stack.so.24.0 00:02:05.362 [461/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:05.362 [462/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:05.636 [463/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:05.636 [464/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:05.636 [465/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.636 [466/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:05.636 [467/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:05.636 [468/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:05.636 [469/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:05.636 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:05.636 [471/710] Linking target lib/librte_mbuf.so.24.0 00:02:05.636 [472/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.636 [473/710] Linking target lib/librte_rib.so.24.0 00:02:05.895 [474/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.895 [475/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.895 [476/710] Linking static target drivers/librte_mempool_ring.a 00:02:05.895 [477/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
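Consistent with the enable_drivers list in the configuration summary, only a handful of PMDs are generated and linked in this run (rte_bus_pci, rte_bus_vdev, rte_mempool_ring, and later rte_net_i40e); everything else was reported as "not in enabled drivers build config" at configure time. A quick hedged spot-check of the produced driver objects (directory assumed from meson's layout):

    ls build-tmp/drivers/librte_*.so*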
00:02:05.895 [478/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.895 [479/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:05.895 [480/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:05.896 [481/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.896 [482/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:05.896 [483/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:05.896 [484/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.896 [485/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:05.896 [486/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:06.156 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:06.156 [488/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:06.156 [489/710] Linking target lib/librte_net.so.24.0 00:02:06.156 [490/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:06.156 [491/710] Linking target lib/librte_bbdev.so.24.0 00:02:06.156 [492/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:06.156 [493/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:06.156 [494/710] Linking target lib/librte_compressdev.so.24.0 00:02:06.156 [495/710] Linking target lib/librte_cryptodev.so.24.0 00:02:06.156 [496/710] Linking target lib/librte_distributor.so.24.0 00:02:06.156 [497/710] Linking target lib/librte_gpudev.so.24.0 00:02:06.156 [498/710] Linking target lib/librte_regexdev.so.24.0 00:02:06.156 [499/710] Linking target lib/librte_mldev.so.24.0 00:02:06.156 [500/710] Linking target lib/librte_reorder.so.24.0 00:02:06.156 [501/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:06.156 [502/710] Linking target lib/librte_sched.so.24.0 00:02:06.156 [503/710] Linking target lib/librte_fib.so.24.0 00:02:06.156 [504/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:06.156 [505/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:06.419 [506/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:06.419 [507/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:06.419 [508/710] Linking target lib/librte_cmdline.so.24.0 00:02:06.419 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:06.419 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:06.419 [511/710] Linking target lib/librte_hash.so.24.0 00:02:06.419 [512/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.419 [513/710] Linking target lib/librte_security.so.24.0 00:02:06.419 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.679 [515/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:06.679 [516/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:06.679 [517/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:06.679 [518/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:06.679 [519/710] Linking target lib/librte_efd.so.24.0 00:02:06.679 [520/710] Linking target lib/librte_lpm.so.24.0 00:02:06.679 [521/710] Compiling C 
object app/dpdk-graph.p/graph_utils.c.o 00:02:06.938 [522/710] Linking target lib/librte_member.so.24.0 00:02:06.938 [523/710] Linking target lib/librte_ipsec.so.24.0 00:02:06.938 [524/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:06.938 [525/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:06.938 [526/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:06.938 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:07.198 [528/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:07.198 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:07.198 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:07.198 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:07.198 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:07.458 [533/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:07.458 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:07.458 [535/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:07.458 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:07.715 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:07.715 [538/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:07.715 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:07.715 [540/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:07.715 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:07.976 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:07.976 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:07.976 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:08.240 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:08.240 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:08.240 [547/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:08.240 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:08.240 [549/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:08.240 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:08.240 [551/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:08.240 [552/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:08.501 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:08.501 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:08.501 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:08.761 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:08.761 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:08.761 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:08.761 [559/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 
00:02:09.022 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:09.282 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:09.542 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:09.542 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:09.543 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:09.543 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:09.543 [566/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:09.803 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:09.803 [568/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.804 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:09.804 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:09.804 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:09.804 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:09.804 [573/710] Linking target lib/librte_ethdev.so.24.0 00:02:10.062 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:10.062 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:10.062 [576/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:10.062 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:10.062 [578/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:10.062 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:10.062 [580/710] Linking target lib/librte_metrics.so.24.0 00:02:10.325 [581/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:10.325 [582/710] Linking target lib/librte_bpf.so.24.0 00:02:10.325 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:10.325 [584/710] Linking target lib/librte_gro.so.24.0 00:02:10.325 [585/710] Linking target lib/librte_eventdev.so.24.0 00:02:10.325 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:10.325 [587/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:10.325 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:10.325 [589/710] Linking target lib/librte_gso.so.24.0 00:02:10.325 [590/710] Linking target lib/librte_ip_frag.so.24.0 00:02:10.325 [591/710] Linking static target lib/librte_pdcp.a 00:02:10.584 [592/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:10.584 [593/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:10.584 [594/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:10.584 [595/710] Linking target lib/librte_pcapng.so.24.0 00:02:10.584 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:10.584 [597/710] Linking target lib/librte_power.so.24.0 00:02:10.584 [598/710] Linking target lib/librte_latencystats.so.24.0 00:02:10.584 [599/710] Linking target lib/librte_bitratestats.so.24.0 00:02:10.584 [600/710] Generating symbol file 
lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:10.584 [601/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:10.584 [602/710] Linking target lib/librte_dispatcher.so.24.0 00:02:10.584 [603/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:10.584 [604/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:10.584 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:10.584 [606/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:10.846 [607/710] Linking target lib/librte_pdump.so.24.0 00:02:10.846 [608/710] Linking target lib/librte_port.so.24.0 00:02:10.846 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:10.846 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:10.846 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:10.846 [612/710] Linking target lib/librte_graph.so.24.0 00:02:10.846 [613/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.105 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:11.105 [615/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:11.105 [616/710] Linking target lib/librte_pdcp.so.24.0 00:02:11.105 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:11.105 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:11.105 [619/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:11.105 [620/710] Linking target lib/librte_table.so.24.0 00:02:11.105 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:11.364 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:11.364 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:11.364 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:11.364 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:11.364 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:11.364 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:11.364 [628/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:11.629 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:11.890 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:11.890 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:11.890 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:11.890 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:12.147 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:12.147 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:12.147 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:12.147 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:12.404 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:12.404 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:12.404 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:12.404 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:12.662 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:12.662 [643/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:12.662 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:12.662 [645/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:12.662 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.921 [647/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:12.921 [648/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:12.921 [649/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:12.921 [650/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:12.921 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:13.179 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:13.438 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:13.438 [654/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:13.438 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:13.438 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:13.438 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:13.696 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:13.696 [659/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:13.696 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:13.696 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:13.696 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:13.696 [663/710] Linking static target drivers/librte_net_i40e.a 00:02:13.954 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:13.954 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:14.212 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:14.212 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:14.212 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.471 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:14.471 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:15.036 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:15.036 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:15.600 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:15.600 [674/710] Linking static target lib/librte_node.a 00:02:15.857 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.857 [676/710] Linking target lib/librte_node.so.24.0 00:02:16.420 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 
00:02:16.677 [678/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:17.269 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:18.207 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:18.770 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:24.099 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.792 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.792 [684/710] Linking static target lib/librte_vhost.a 00:03:02.792 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.792 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:10.924 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:10.924 [688/710] Linking static target lib/librte_pipeline.a 00:03:10.924 [689/710] Linking target app/dpdk-test-acl 00:03:10.924 [690/710] Linking target app/dpdk-proc-info 00:03:10.924 [691/710] Linking target app/dpdk-test-dma-perf 00:03:10.924 [692/710] Linking target app/dpdk-dumpcap 00:03:10.924 [693/710] Linking target app/dpdk-test-flow-perf 00:03:10.924 [694/710] Linking target app/dpdk-test-fib 00:03:10.924 [695/710] Linking target app/dpdk-test-regex 00:03:10.924 [696/710] Linking target app/dpdk-test-cmdline 00:03:10.924 [697/710] Linking target app/dpdk-test-gpudev 00:03:10.924 [698/710] Linking target app/dpdk-pdump 00:03:10.924 [699/710] Linking target app/dpdk-graph 00:03:10.924 [700/710] Linking target app/dpdk-test-bbdev 00:03:10.924 [701/710] Linking target app/dpdk-test-sad 00:03:10.924 [702/710] Linking target app/dpdk-test-pipeline 00:03:10.924 [703/710] Linking target app/dpdk-test-security-perf 00:03:10.924 [704/710] Linking target app/dpdk-test-crypto-perf 00:03:10.924 [705/710] Linking target app/dpdk-test-mldev 00:03:10.924 [706/710] Linking target app/dpdk-test-compress-perf 00:03:10.924 [707/710] Linking target app/dpdk-test-eventdev 00:03:10.924 [708/710] Linking target app/dpdk-testpmd 00:03:12.823 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.081 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:13.081 03:12:58 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:13.081 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:13.081 [0/1] Installing files. 
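The install step copies artifacts to the prefix recorded in the configuration summary, so the example sources listed below land under <prefix>/share/dpdk/examples. A hedged spot-check once the install finishes (prefix taken from the summary above):

    ls /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd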
00:03:13.341 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.343 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:13.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.345 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.346 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.346 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:13.346 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:13.346 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.603 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:14.213 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:14.213 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:14.213 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:14.213 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:14.213 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.213 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.215 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.216 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:14.217 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:14.217 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:14.217 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:14.217 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:14.217 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:14.217 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:14.217 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:14.217 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:14.217 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:14.217 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:14.217 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:14.217 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:14.217 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:14.217 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:14.217 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:14.217 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:14.217 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:14.217 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:14.217 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:14.217 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:14.217 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:14.217 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:14.217 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:14.217 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:14.217 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:14.217 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:14.217 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:14.217 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:14.217 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:14.217 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:14.217 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:14.217 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:14.217 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:14.217 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:14.217 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:14.217 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:14.217 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:14.217 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:14.217 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:14.217 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:14.217 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:14.217 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:14.217 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:14.217 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:14.217 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:14.217 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:14.217 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:14.217 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:14.217 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:14.217 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:14.217 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:14.217 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:14.217 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:14.217 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:14.217 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:14.217 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:14.217 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:14.217 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:14.217 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:14.217 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:14.218 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:14.218 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:14.218 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:14.218 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:14.218 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:14.218 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:14.218 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:14.218 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:14.218 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:14.218 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:14.218 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:14.218 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:14.218 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:14.218 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:14.218 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:14.218 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:14.218 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:14.218 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:14.218 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:14.218 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:14.218 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:14.218 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:14.218 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:14.218 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:14.218 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:14.218 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:14.218 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:14.218 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:14.218 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:14.218 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:14.218 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:14.218 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:14.218 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:14.218 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:14.218 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:14.218 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:14.218 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:14.218 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:14.218 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:14.218 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:14.218 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:14.218 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:14.218 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:14.218 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:14.218 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:14.218 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:14.218 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:14.218 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:14.218 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:14.218 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:14.218 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:14.218 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:14.218 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:14.218 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:14.218 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:14.218 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:14.218 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:14.218 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:14.218 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:14.218 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:14.218 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:14.218 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:14.218 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:14.218 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:14.218 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:14.218 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:14.218 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:14.218 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:14.218 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:14.218 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:14.218 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:14.218 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:14.218 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:14.218 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:14.218 03:12:59 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s
00:03:14.218 03:12:59 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:14.218 03:12:59 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat
00:03:14.218 03:12:59 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:14.218
00:03:14.218 real 1m29.354s
00:03:14.218 user 18m5.523s
00:03:14.218 sys 2m6.908s
00:03:14.218 03:12:59 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:03:14.218 03:12:59 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:14.218 ************************************
00:03:14.218 END TEST build_native_dpdk
00:03:14.218 ************************************
00:03:14.218 03:12:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:14.218 03:12:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:14.218 03:12:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:14.218 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:14.476 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:14.476 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.476 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:14.733 Using 'verbs' RDMA provider
00:03:25.293 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:33.402 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:33.660 Creating mk/config.mk...done.
00:03:33.660 Creating mk/cc.flags.mk...done.
00:03:33.660 Type 'make' to build.
00:03:33.660 03:13:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:33.660 03:13:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:03:33.660 03:13:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:03:33.660 03:13:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:33.660 ************************************
00:03:33.660 START TEST make
00:03:33.660 ************************************
00:03:33.660 03:13:18 make -- common/autotest_common.sh@1121 -- $ make -j48
00:03:33.918 make[1]: Nothing to be done for 'all'.
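The configure invocation above resolves the prebuilt DPDK through the pkg-config files staged earlier (libdpdk.pc and libdpdk-libs.pc); that is what the "Using .../dpdk/build/lib/pkgconfig for additional libs" line reports. The same lookup done by hand, as a sketch using the paths from this log; the flags in the comments are the expected shape of the output, not captured from this run:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # roughly: -I.../dpdk/build/include
  pkg-config --libs libdpdk     # roughly: -L.../dpdk/build/lib -lrte_ethdev -lrte_eal ...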
00:03:35.305 The Meson build system
00:03:35.305 Version: 1.3.1
00:03:35.305 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:35.305 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:35.305 Build type: native build
00:03:35.305 Project name: libvfio-user
00:03:35.305 Project version: 0.0.1
00:03:35.305 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:35.305 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:35.305 Host machine cpu family: x86_64
00:03:35.305 Host machine cpu: x86_64
00:03:35.305 Run-time dependency threads found: YES
00:03:35.305 Library dl found: YES
00:03:35.305 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:35.305 Run-time dependency json-c found: YES 0.17
00:03:35.305 Run-time dependency cmocka found: YES 1.1.7
00:03:35.305 Program pytest-3 found: NO
00:03:35.305 Program flake8 found: NO
00:03:35.305 Program misspell-fixer found: NO
00:03:35.305 Program restructuredtext-lint found: NO
00:03:35.305 Program valgrind found: YES (/usr/bin/valgrind)
00:03:35.305 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:35.305 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:35.305 Compiler for C supports arguments -Wwrite-strings: YES
00:03:35.305 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:35.305 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:35.305 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:35.305 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
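The libvfio-user configuration summarized above corresponds to a standard meson setup invocation; a sketch that should reproduce the option summary just below, assuming the source and build directories shown in this log:

  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib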
00:03:35.305 Build targets in project: 8
00:03:35.305 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:35.305 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:35.305
00:03:35.305 libvfio-user 0.0.1
00:03:35.305
00:03:35.305 User defined options
00:03:35.305 buildtype : debug
00:03:35.305 default_library: shared
00:03:35.305 libdir : /usr/local/lib
00:03:35.305
00:03:35.305 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:36.250 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:36.511 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:36.511 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:36.511 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:36.511 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:36.511 [5/37] Compiling C object samples/null.p/null.c.o
00:03:36.511 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:36.511 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:36.511 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:36.511 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:36.511 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:36.511 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:36.511 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:36.511 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:36.511 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:36.511 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:36.511 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:36.511 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:36.511 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:36.511 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:36.774 [20/37] Compiling C object samples/server.p/server.c.o
00:03:36.774 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:36.774 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:36.774 [23/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:36.774 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:36.774 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:36.774 [26/37] Compiling C object samples/client.p/client.c.o
00:03:36.774 [27/37] Linking target samples/client
00:03:36.774 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:36.774 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:37.041 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:37.041 [31/37] Linking target test/unit_tests
00:03:37.041 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:37.041 [33/37] Linking target samples/server
00:03:37.301 [34/37] Linking target samples/lspci
00:03:37.301 [35/37] Linking target samples/null
00:03:37.301 [36/37] Linking target samples/shadow_ioeventfd_server
00:03:37.301 [37/37] Linking target samples/gpio-pci-idio-16
00:03:37.301 INFO: autodetecting backend as ninja
00:03:37.301 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
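The DESTDIR= invocation that follows stages the install: meson prefixes every destination with $DESTDIR, so the libvfio-user artifacts land inside SPDK's build tree rather than in /usr/local on the build host. The generic pattern, with an illustrative staging directory (not the paths from this run):

  # files land under $DESTDIR$prefix; nothing touches the real prefix
  DESTDIR=/tmp/stage meson install -C build-debug
  ls /tmp/stage/usr/local/lib    # libvfio-user.so* would appear here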
00:03:37.301 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:37.873 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:37.873 ninja: no work to do. 00:03:50.062 CC lib/ut/ut.o 00:03:50.062 CC lib/log/log.o 00:03:50.062 CC lib/log/log_flags.o 00:03:50.062 CC lib/ut_mock/mock.o 00:03:50.062 CC lib/log/log_deprecated.o 00:03:50.062 LIB libspdk_ut.a 00:03:50.062 LIB libspdk_log.a 00:03:50.062 LIB libspdk_ut_mock.a 00:03:50.062 SO libspdk_ut.so.2.0 00:03:50.062 SO libspdk_log.so.7.0 00:03:50.062 SO libspdk_ut_mock.so.6.0 00:03:50.062 SYMLINK libspdk_ut.so 00:03:50.062 SYMLINK libspdk_ut_mock.so 00:03:50.062 SYMLINK libspdk_log.so 00:03:50.062 CXX lib/trace_parser/trace.o 00:03:50.062 CC lib/ioat/ioat.o 00:03:50.062 CC lib/dma/dma.o 00:03:50.062 CC lib/util/base64.o 00:03:50.062 CC lib/util/bit_array.o 00:03:50.062 CC lib/util/cpuset.o 00:03:50.062 CC lib/util/crc16.o 00:03:50.062 CC lib/util/crc32.o 00:03:50.062 CC lib/util/crc32c.o 00:03:50.062 CC lib/util/crc32_ieee.o 00:03:50.062 CC lib/util/crc64.o 00:03:50.062 CC lib/util/dif.o 00:03:50.062 CC lib/util/fd.o 00:03:50.062 CC lib/util/file.o 00:03:50.062 CC lib/util/hexlify.o 00:03:50.062 CC lib/util/iov.o 00:03:50.062 CC lib/util/math.o 00:03:50.062 CC lib/util/pipe.o 00:03:50.062 CC lib/util/strerror_tls.o 00:03:50.062 CC lib/util/string.o 00:03:50.062 CC lib/util/uuid.o 00:03:50.062 CC lib/util/fd_group.o 00:03:50.062 CC lib/util/xor.o 00:03:50.062 CC lib/util/zipf.o 00:03:50.062 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.062 CC lib/vfio_user/host/vfio_user.o 00:03:50.062 LIB libspdk_dma.a 00:03:50.062 SO libspdk_dma.so.4.0 00:03:50.062 SYMLINK libspdk_dma.so 00:03:50.062 LIB libspdk_ioat.a 00:03:50.062 SO libspdk_ioat.so.7.0 00:03:50.062 LIB libspdk_vfio_user.a 00:03:50.062 SO libspdk_vfio_user.so.5.0 00:03:50.062 SYMLINK libspdk_ioat.so 00:03:50.062 SYMLINK libspdk_vfio_user.so 00:03:50.062 LIB libspdk_util.a 00:03:50.319 SO libspdk_util.so.9.0 00:03:50.319 SYMLINK libspdk_util.so 00:03:50.577 CC lib/env_dpdk/env.o 00:03:50.577 CC lib/rdma/common.o 00:03:50.577 CC lib/vmd/vmd.o 00:03:50.577 CC lib/conf/conf.o 00:03:50.577 CC lib/env_dpdk/memory.o 00:03:50.577 CC lib/rdma/rdma_verbs.o 00:03:50.577 CC lib/json/json_parse.o 00:03:50.577 CC lib/vmd/led.o 00:03:50.577 CC lib/idxd/idxd.o 00:03:50.577 CC lib/env_dpdk/pci.o 00:03:50.577 CC lib/idxd/idxd_user.o 00:03:50.577 CC lib/json/json_util.o 00:03:50.577 CC lib/env_dpdk/init.o 00:03:50.577 CC lib/idxd/idxd_kernel.o 00:03:50.577 CC lib/json/json_write.o 00:03:50.577 CC lib/env_dpdk/threads.o 00:03:50.577 CC lib/env_dpdk/pci_ioat.o 00:03:50.577 CC lib/env_dpdk/pci_virtio.o 00:03:50.577 CC lib/env_dpdk/pci_vmd.o 00:03:50.577 CC lib/env_dpdk/pci_idxd.o 00:03:50.577 CC lib/env_dpdk/pci_event.o 00:03:50.577 CC lib/env_dpdk/pci_dpdk.o 00:03:50.577 CC lib/env_dpdk/sigbus_handler.o 00:03:50.577 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:50.577 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:50.577 LIB libspdk_trace_parser.a 00:03:50.577 SO libspdk_trace_parser.so.5.0 00:03:50.835 SYMLINK libspdk_trace_parser.so 00:03:50.835 LIB libspdk_conf.a 00:03:50.835 SO libspdk_conf.so.6.0 00:03:50.835 LIB libspdk_rdma.a 00:03:50.835 SYMLINK libspdk_conf.so 00:03:50.835 LIB libspdk_json.a 00:03:50.835 SO libspdk_rdma.so.6.0 00:03:50.835 SO libspdk_json.so.6.0 00:03:50.835 SYMLINK libspdk_rdma.so 00:03:51.092 SYMLINK 
libspdk_json.so 00:03:51.092 CC lib/jsonrpc/jsonrpc_server.o 00:03:51.092 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:51.092 CC lib/jsonrpc/jsonrpc_client.o 00:03:51.092 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:51.092 LIB libspdk_idxd.a 00:03:51.092 SO libspdk_idxd.so.12.0 00:03:51.350 SYMLINK libspdk_idxd.so 00:03:51.350 LIB libspdk_vmd.a 00:03:51.350 SO libspdk_vmd.so.6.0 00:03:51.350 SYMLINK libspdk_vmd.so 00:03:51.350 LIB libspdk_jsonrpc.a 00:03:51.350 SO libspdk_jsonrpc.so.6.0 00:03:51.608 SYMLINK libspdk_jsonrpc.so 00:03:51.608 CC lib/rpc/rpc.o 00:03:51.865 LIB libspdk_rpc.a 00:03:51.865 SO libspdk_rpc.so.6.0 00:03:51.865 SYMLINK libspdk_rpc.so 00:03:52.123 CC lib/keyring/keyring.o 00:03:52.123 CC lib/trace/trace.o 00:03:52.123 CC lib/notify/notify.o 00:03:52.123 CC lib/trace/trace_flags.o 00:03:52.123 CC lib/keyring/keyring_rpc.o 00:03:52.123 CC lib/notify/notify_rpc.o 00:03:52.123 CC lib/trace/trace_rpc.o 00:03:52.381 LIB libspdk_notify.a 00:03:52.381 SO libspdk_notify.so.6.0 00:03:52.381 LIB libspdk_keyring.a 00:03:52.381 SYMLINK libspdk_notify.so 00:03:52.381 LIB libspdk_trace.a 00:03:52.381 SO libspdk_keyring.so.1.0 00:03:52.381 SO libspdk_trace.so.10.0 00:03:52.381 SYMLINK libspdk_keyring.so 00:03:52.381 SYMLINK libspdk_trace.so 00:03:52.639 LIB libspdk_env_dpdk.a 00:03:52.639 CC lib/sock/sock.o 00:03:52.639 CC lib/sock/sock_rpc.o 00:03:52.639 SO libspdk_env_dpdk.so.14.0 00:03:52.639 CC lib/thread/thread.o 00:03:52.639 CC lib/thread/iobuf.o 00:03:52.639 SYMLINK libspdk_env_dpdk.so 00:03:52.897 LIB libspdk_sock.a 00:03:52.897 SO libspdk_sock.so.9.0 00:03:53.155 SYMLINK libspdk_sock.so 00:03:53.155 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.155 CC lib/nvme/nvme_ctrlr.o 00:03:53.155 CC lib/nvme/nvme_fabric.o 00:03:53.155 CC lib/nvme/nvme_ns_cmd.o 00:03:53.155 CC lib/nvme/nvme_ns.o 00:03:53.155 CC lib/nvme/nvme_pcie_common.o 00:03:53.155 CC lib/nvme/nvme_pcie.o 00:03:53.155 CC lib/nvme/nvme_qpair.o 00:03:53.155 CC lib/nvme/nvme.o 00:03:53.155 CC lib/nvme/nvme_quirks.o 00:03:53.155 CC lib/nvme/nvme_transport.o 00:03:53.155 CC lib/nvme/nvme_discovery.o 00:03:53.155 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:53.155 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.155 CC lib/nvme/nvme_tcp.o 00:03:53.155 CC lib/nvme/nvme_opal.o 00:03:53.155 CC lib/nvme/nvme_io_msg.o 00:03:53.155 CC lib/nvme/nvme_poll_group.o 00:03:53.155 CC lib/nvme/nvme_zns.o 00:03:53.155 CC lib/nvme/nvme_stubs.o 00:03:53.155 CC lib/nvme/nvme_auth.o 00:03:53.155 CC lib/nvme/nvme_cuse.o 00:03:53.155 CC lib/nvme/nvme_rdma.o 00:03:53.155 CC lib/nvme/nvme_vfio_user.o 00:03:54.088 LIB libspdk_thread.a 00:03:54.088 SO libspdk_thread.so.10.0 00:03:54.361 SYMLINK libspdk_thread.so 00:03:54.361 CC lib/accel/accel.o 00:03:54.361 CC lib/blob/blobstore.o 00:03:54.361 CC lib/accel/accel_rpc.o 00:03:54.362 CC lib/init/json_config.o 00:03:54.362 CC lib/blob/request.o 00:03:54.362 CC lib/virtio/virtio.o 00:03:54.362 CC lib/vfu_tgt/tgt_endpoint.o 00:03:54.362 CC lib/accel/accel_sw.o 00:03:54.362 CC lib/blob/zeroes.o 00:03:54.362 CC lib/virtio/virtio_vhost_user.o 00:03:54.362 CC lib/init/subsystem.o 00:03:54.362 CC lib/vfu_tgt/tgt_rpc.o 00:03:54.362 CC lib/blob/blob_bs_dev.o 00:03:54.362 CC lib/init/subsystem_rpc.o 00:03:54.362 CC lib/virtio/virtio_vfio_user.o 00:03:54.362 CC lib/init/rpc.o 00:03:54.362 CC lib/virtio/virtio_pci.o 00:03:54.670 LIB libspdk_init.a 00:03:54.670 SO libspdk_init.so.5.0 00:03:54.670 LIB libspdk_vfu_tgt.a 00:03:54.670 LIB libspdk_virtio.a 00:03:54.928 SYMLINK libspdk_init.so 00:03:54.928 SO libspdk_vfu_tgt.so.3.0 00:03:54.928 
SO libspdk_virtio.so.7.0 00:03:54.928 SYMLINK libspdk_vfu_tgt.so 00:03:54.928 SYMLINK libspdk_virtio.so 00:03:54.928 CC lib/event/app.o 00:03:54.928 CC lib/event/reactor.o 00:03:54.928 CC lib/event/log_rpc.o 00:03:54.928 CC lib/event/app_rpc.o 00:03:54.928 CC lib/event/scheduler_static.o 00:03:55.496 LIB libspdk_event.a 00:03:55.496 SO libspdk_event.so.13.0 00:03:55.496 SYMLINK libspdk_event.so 00:03:55.496 LIB libspdk_accel.a 00:03:55.496 SO libspdk_accel.so.15.0 00:03:55.496 LIB libspdk_nvme.a 00:03:55.496 SYMLINK libspdk_accel.so 00:03:55.752 SO libspdk_nvme.so.13.0 00:03:55.752 CC lib/bdev/bdev.o 00:03:55.752 CC lib/bdev/bdev_rpc.o 00:03:55.752 CC lib/bdev/bdev_zone.o 00:03:55.752 CC lib/bdev/part.o 00:03:55.752 CC lib/bdev/scsi_nvme.o 00:03:56.008 SYMLINK libspdk_nvme.so 00:03:57.379 LIB libspdk_blob.a 00:03:57.379 SO libspdk_blob.so.11.0 00:03:57.379 SYMLINK libspdk_blob.so 00:03:57.637 CC lib/lvol/lvol.o 00:03:57.637 CC lib/blobfs/blobfs.o 00:03:57.637 CC lib/blobfs/tree.o 00:03:58.572 LIB libspdk_bdev.a 00:03:58.572 SO libspdk_bdev.so.15.0 00:03:58.572 LIB libspdk_blobfs.a 00:03:58.572 SYMLINK libspdk_bdev.so 00:03:58.572 SO libspdk_blobfs.so.10.0 00:03:58.572 LIB libspdk_lvol.a 00:03:58.572 SO libspdk_lvol.so.10.0 00:03:58.572 SYMLINK libspdk_blobfs.so 00:03:58.572 SYMLINK libspdk_lvol.so 00:03:58.572 CC lib/ublk/ublk.o 00:03:58.572 CC lib/ublk/ublk_rpc.o 00:03:58.572 CC lib/ftl/ftl_core.o 00:03:58.572 CC lib/nbd/nbd.o 00:03:58.572 CC lib/scsi/dev.o 00:03:58.572 CC lib/nvmf/ctrlr.o 00:03:58.572 CC lib/ftl/ftl_init.o 00:03:58.572 CC lib/nbd/nbd_rpc.o 00:03:58.572 CC lib/scsi/lun.o 00:03:58.572 CC lib/nvmf/ctrlr_discovery.o 00:03:58.572 CC lib/ftl/ftl_layout.o 00:03:58.572 CC lib/scsi/port.o 00:03:58.572 CC lib/nvmf/ctrlr_bdev.o 00:03:58.572 CC lib/scsi/scsi.o 00:03:58.572 CC lib/nvmf/subsystem.o 00:03:58.572 CC lib/ftl/ftl_debug.o 00:03:58.572 CC lib/scsi/scsi_bdev.o 00:03:58.572 CC lib/nvmf/nvmf.o 00:03:58.572 CC lib/ftl/ftl_io.o 00:03:58.572 CC lib/ftl/ftl_sb.o 00:03:58.572 CC lib/nvmf/nvmf_rpc.o 00:03:58.572 CC lib/scsi/scsi_pr.o 00:03:58.572 CC lib/ftl/ftl_l2p.o 00:03:58.572 CC lib/nvmf/transport.o 00:03:58.572 CC lib/scsi/scsi_rpc.o 00:03:58.572 CC lib/ftl/ftl_l2p_flat.o 00:03:58.572 CC lib/scsi/task.o 00:03:58.572 CC lib/ftl/ftl_nv_cache.o 00:03:58.572 CC lib/nvmf/tcp.o 00:03:58.572 CC lib/nvmf/stubs.o 00:03:58.572 CC lib/ftl/ftl_band.o 00:03:58.572 CC lib/nvmf/mdns_server.o 00:03:58.572 CC lib/ftl/ftl_band_ops.o 00:03:58.572 CC lib/nvmf/vfio_user.o 00:03:58.572 CC lib/ftl/ftl_writer.o 00:03:58.572 CC lib/nvmf/rdma.o 00:03:58.572 CC lib/ftl/ftl_rq.o 00:03:58.572 CC lib/nvmf/auth.o 00:03:58.572 CC lib/ftl/ftl_reloc.o 00:03:58.572 CC lib/ftl/ftl_l2p_cache.o 00:03:58.572 CC lib/ftl/ftl_p2l.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.572 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:59.143 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:59.143 CC lib/ftl/utils/ftl_conf.o 00:03:59.143 CC lib/ftl/utils/ftl_md.o 00:03:59.143 CC lib/ftl/utils/ftl_mempool.o 00:03:59.143 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.143 CC 
lib/ftl/utils/ftl_property.o 00:03:59.143 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.143 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.143 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.143 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.143 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.143 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.143 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.143 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.143 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.401 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.401 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.401 CC lib/ftl/base/ftl_base_dev.o 00:03:59.401 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.401 CC lib/ftl/ftl_trace.o 00:03:59.401 LIB libspdk_nbd.a 00:03:59.659 SO libspdk_nbd.so.7.0 00:03:59.659 LIB libspdk_scsi.a 00:03:59.659 SYMLINK libspdk_nbd.so 00:03:59.659 SO libspdk_scsi.so.9.0 00:03:59.659 LIB libspdk_ublk.a 00:03:59.659 SYMLINK libspdk_scsi.so 00:03:59.659 SO libspdk_ublk.so.3.0 00:03:59.916 SYMLINK libspdk_ublk.so 00:03:59.916 CC lib/vhost/vhost.o 00:03:59.916 CC lib/iscsi/conn.o 00:03:59.916 CC lib/iscsi/init_grp.o 00:03:59.916 CC lib/vhost/vhost_rpc.o 00:03:59.916 CC lib/vhost/vhost_scsi.o 00:03:59.916 CC lib/iscsi/iscsi.o 00:03:59.916 CC lib/iscsi/md5.o 00:03:59.916 CC lib/vhost/vhost_blk.o 00:03:59.916 CC lib/vhost/rte_vhost_user.o 00:03:59.916 CC lib/iscsi/param.o 00:03:59.916 CC lib/iscsi/portal_grp.o 00:03:59.916 CC lib/iscsi/tgt_node.o 00:03:59.916 CC lib/iscsi/iscsi_subsystem.o 00:03:59.916 CC lib/iscsi/iscsi_rpc.o 00:03:59.916 CC lib/iscsi/task.o 00:04:00.173 LIB libspdk_ftl.a 00:04:00.173 SO libspdk_ftl.so.9.0 00:04:00.738 SYMLINK libspdk_ftl.so 00:04:00.996 LIB libspdk_vhost.a 00:04:01.253 SO libspdk_vhost.so.8.0 00:04:01.253 SYMLINK libspdk_vhost.so 00:04:01.253 LIB libspdk_nvmf.a 00:04:01.253 LIB libspdk_iscsi.a 00:04:01.253 SO libspdk_nvmf.so.18.0 00:04:01.511 SO libspdk_iscsi.so.8.0 00:04:01.511 SYMLINK libspdk_iscsi.so 00:04:01.511 SYMLINK libspdk_nvmf.so 00:04:01.769 CC module/env_dpdk/env_dpdk_rpc.o 00:04:01.769 CC module/vfu_device/vfu_virtio.o 00:04:01.769 CC module/vfu_device/vfu_virtio_blk.o 00:04:01.769 CC module/vfu_device/vfu_virtio_scsi.o 00:04:01.769 CC module/vfu_device/vfu_virtio_rpc.o 00:04:01.769 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:01.769 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:01.769 CC module/scheduler/gscheduler/gscheduler.o 00:04:01.769 CC module/accel/iaa/accel_iaa.o 00:04:01.769 CC module/sock/posix/posix.o 00:04:01.769 CC module/blob/bdev/blob_bdev.o 00:04:01.769 CC module/accel/iaa/accel_iaa_rpc.o 00:04:01.769 CC module/keyring/file/keyring.o 00:04:01.769 CC module/keyring/file/keyring_rpc.o 00:04:01.769 CC module/keyring/linux/keyring.o 00:04:01.769 CC module/keyring/linux/keyring_rpc.o 00:04:01.769 CC module/accel/dsa/accel_dsa.o 00:04:01.769 CC module/accel/error/accel_error.o 00:04:01.769 CC module/accel/dsa/accel_dsa_rpc.o 00:04:01.769 CC module/accel/error/accel_error_rpc.o 00:04:01.769 CC module/accel/ioat/accel_ioat.o 00:04:01.769 CC module/accel/ioat/accel_ioat_rpc.o 00:04:02.026 LIB libspdk_env_dpdk_rpc.a 00:04:02.026 SO libspdk_env_dpdk_rpc.so.6.0 00:04:02.026 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.026 LIB libspdk_keyring_linux.a 00:04:02.026 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.026 LIB libspdk_keyring_file.a 00:04:02.026 LIB libspdk_scheduler_gscheduler.a 00:04:02.026 SO libspdk_scheduler_gscheduler.so.4.0 00:04:02.026 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:02.026 SO libspdk_keyring_linux.so.1.0 00:04:02.026 
SO libspdk_keyring_file.so.1.0 00:04:02.026 LIB libspdk_accel_error.a 00:04:02.026 LIB libspdk_accel_ioat.a 00:04:02.026 LIB libspdk_scheduler_dynamic.a 00:04:02.026 LIB libspdk_accel_iaa.a 00:04:02.026 SO libspdk_accel_error.so.2.0 00:04:02.026 SO libspdk_scheduler_dynamic.so.4.0 00:04:02.026 SO libspdk_accel_ioat.so.6.0 00:04:02.026 SYMLINK libspdk_scheduler_gscheduler.so 00:04:02.026 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.026 SO libspdk_accel_iaa.so.3.0 00:04:02.026 SYMLINK libspdk_keyring_linux.so 00:04:02.026 SYMLINK libspdk_keyring_file.so 00:04:02.282 LIB libspdk_accel_dsa.a 00:04:02.282 SYMLINK libspdk_accel_error.so 00:04:02.282 LIB libspdk_blob_bdev.a 00:04:02.282 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.282 SYMLINK libspdk_accel_ioat.so 00:04:02.282 SYMLINK libspdk_accel_iaa.so 00:04:02.282 SO libspdk_accel_dsa.so.5.0 00:04:02.282 SO libspdk_blob_bdev.so.11.0 00:04:02.282 SYMLINK libspdk_blob_bdev.so 00:04:02.282 SYMLINK libspdk_accel_dsa.so 00:04:02.540 LIB libspdk_vfu_device.a 00:04:02.540 SO libspdk_vfu_device.so.3.0 00:04:02.540 CC module/bdev/nvme/bdev_nvme.o 00:04:02.540 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.540 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.540 CC module/bdev/null/bdev_null.o 00:04:02.540 CC module/bdev/ftl/bdev_ftl.o 00:04:02.540 CC module/bdev/aio/bdev_aio.o 00:04:02.540 CC module/bdev/malloc/bdev_malloc.o 00:04:02.540 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.540 CC module/bdev/aio/bdev_aio_rpc.o 00:04:02.540 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.540 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.540 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:02.540 CC module/bdev/raid/bdev_raid.o 00:04:02.540 CC module/bdev/null/bdev_null_rpc.o 00:04:02.540 CC module/bdev/gpt/gpt.o 00:04:02.540 CC module/bdev/delay/vbdev_delay.o 00:04:02.540 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.540 CC module/bdev/gpt/vbdev_gpt.o 00:04:02.540 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:02.540 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:02.540 CC module/blobfs/bdev/blobfs_bdev.o 00:04:02.540 CC module/bdev/error/vbdev_error.o 00:04:02.540 CC module/bdev/split/vbdev_split.o 00:04:02.540 CC module/bdev/nvme/nvme_rpc.o 00:04:02.540 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.540 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.540 CC module/bdev/error/vbdev_error_rpc.o 00:04:02.540 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.540 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:02.540 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.540 CC module/bdev/lvol/vbdev_lvol.o 00:04:02.540 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:02.540 CC module/bdev/raid/raid0.o 00:04:02.540 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:02.540 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:02.540 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:02.540 CC module/bdev/nvme/vbdev_opal.o 00:04:02.540 CC module/bdev/raid/raid1.o 00:04:02.540 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:02.540 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.540 CC module/bdev/raid/concat.o 00:04:02.540 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.540 SYMLINK libspdk_vfu_device.so 00:04:02.798 LIB libspdk_sock_posix.a 00:04:02.798 SO libspdk_sock_posix.so.6.0 00:04:02.798 LIB libspdk_blobfs_bdev.a 00:04:02.798 SO libspdk_blobfs_bdev.so.6.0 00:04:03.056 SYMLINK libspdk_blobfs_bdev.so 00:04:03.056 SYMLINK libspdk_sock_posix.so 00:04:03.056 LIB libspdk_bdev_split.a 00:04:03.056 LIB libspdk_bdev_gpt.a 00:04:03.056 SO libspdk_bdev_split.so.6.0 00:04:03.056 LIB 
libspdk_bdev_null.a 00:04:03.056 LIB libspdk_bdev_ftl.a 00:04:03.056 SO libspdk_bdev_gpt.so.6.0 00:04:03.056 LIB libspdk_bdev_aio.a 00:04:03.056 SO libspdk_bdev_null.so.6.0 00:04:03.056 LIB libspdk_bdev_error.a 00:04:03.056 SO libspdk_bdev_ftl.so.6.0 00:04:03.056 SO libspdk_bdev_aio.so.6.0 00:04:03.056 SYMLINK libspdk_bdev_split.so 00:04:03.056 LIB libspdk_bdev_passthru.a 00:04:03.056 SO libspdk_bdev_error.so.6.0 00:04:03.056 LIB libspdk_bdev_malloc.a 00:04:03.056 SYMLINK libspdk_bdev_gpt.so 00:04:03.056 LIB libspdk_bdev_iscsi.a 00:04:03.056 SO libspdk_bdev_passthru.so.6.0 00:04:03.056 SYMLINK libspdk_bdev_null.so 00:04:03.056 SO libspdk_bdev_malloc.so.6.0 00:04:03.056 LIB libspdk_bdev_zone_block.a 00:04:03.056 SYMLINK libspdk_bdev_ftl.so 00:04:03.056 SYMLINK libspdk_bdev_aio.so 00:04:03.056 SO libspdk_bdev_iscsi.so.6.0 00:04:03.056 SYMLINK libspdk_bdev_error.so 00:04:03.056 LIB libspdk_bdev_delay.a 00:04:03.056 SO libspdk_bdev_zone_block.so.6.0 00:04:03.056 SYMLINK libspdk_bdev_passthru.so 00:04:03.056 SYMLINK libspdk_bdev_malloc.so 00:04:03.056 SO libspdk_bdev_delay.so.6.0 00:04:03.314 SYMLINK libspdk_bdev_iscsi.so 00:04:03.314 LIB libspdk_bdev_virtio.a 00:04:03.314 SYMLINK libspdk_bdev_zone_block.so 00:04:03.314 SYMLINK libspdk_bdev_delay.so 00:04:03.314 SO libspdk_bdev_virtio.so.6.0 00:04:03.314 LIB libspdk_bdev_lvol.a 00:04:03.314 SO libspdk_bdev_lvol.so.6.0 00:04:03.314 SYMLINK libspdk_bdev_virtio.so 00:04:03.314 SYMLINK libspdk_bdev_lvol.so 00:04:03.880 LIB libspdk_bdev_raid.a 00:04:03.880 SO libspdk_bdev_raid.so.6.0 00:04:03.880 SYMLINK libspdk_bdev_raid.so 00:04:04.813 LIB libspdk_bdev_nvme.a 00:04:05.072 SO libspdk_bdev_nvme.so.7.0 00:04:05.072 SYMLINK libspdk_bdev_nvme.so 00:04:05.329 CC module/event/subsystems/scheduler/scheduler.o 00:04:05.329 CC module/event/subsystems/sock/sock.o 00:04:05.329 CC module/event/subsystems/vmd/vmd.o 00:04:05.329 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:05.329 CC module/event/subsystems/iobuf/iobuf.o 00:04:05.329 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:05.329 CC module/event/subsystems/keyring/keyring.o 00:04:05.329 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:05.329 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:05.588 LIB libspdk_event_keyring.a 00:04:05.588 LIB libspdk_event_sock.a 00:04:05.588 LIB libspdk_event_scheduler.a 00:04:05.588 LIB libspdk_event_vfu_tgt.a 00:04:05.588 LIB libspdk_event_vhost_blk.a 00:04:05.588 LIB libspdk_event_vmd.a 00:04:05.588 SO libspdk_event_keyring.so.1.0 00:04:05.588 LIB libspdk_event_iobuf.a 00:04:05.588 SO libspdk_event_vhost_blk.so.3.0 00:04:05.588 SO libspdk_event_vfu_tgt.so.3.0 00:04:05.588 SO libspdk_event_sock.so.5.0 00:04:05.588 SO libspdk_event_scheduler.so.4.0 00:04:05.588 SO libspdk_event_vmd.so.6.0 00:04:05.588 SO libspdk_event_iobuf.so.3.0 00:04:05.588 SYMLINK libspdk_event_keyring.so 00:04:05.588 SYMLINK libspdk_event_vhost_blk.so 00:04:05.588 SYMLINK libspdk_event_sock.so 00:04:05.588 SYMLINK libspdk_event_scheduler.so 00:04:05.588 SYMLINK libspdk_event_vfu_tgt.so 00:04:05.588 SYMLINK libspdk_event_vmd.so 00:04:05.588 SYMLINK libspdk_event_iobuf.so 00:04:05.845 CC module/event/subsystems/accel/accel.o 00:04:06.103 LIB libspdk_event_accel.a 00:04:06.103 SO libspdk_event_accel.so.6.0 00:04:06.103 SYMLINK libspdk_event_accel.so 00:04:06.362 CC module/event/subsystems/bdev/bdev.o 00:04:06.362 LIB libspdk_event_bdev.a 00:04:06.362 SO libspdk_event_bdev.so.6.0 00:04:06.619 SYMLINK libspdk_event_bdev.so 00:04:06.619 CC module/event/subsystems/nbd/nbd.o 00:04:06.619 CC 
module/event/subsystems/scsi/scsi.o 00:04:06.619 CC module/event/subsystems/ublk/ublk.o 00:04:06.619 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:06.619 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:06.878 LIB libspdk_event_nbd.a 00:04:06.878 LIB libspdk_event_ublk.a 00:04:06.878 LIB libspdk_event_scsi.a 00:04:06.878 SO libspdk_event_nbd.so.6.0 00:04:06.878 SO libspdk_event_ublk.so.3.0 00:04:06.878 SO libspdk_event_scsi.so.6.0 00:04:06.878 SYMLINK libspdk_event_ublk.so 00:04:06.878 SYMLINK libspdk_event_nbd.so 00:04:06.878 SYMLINK libspdk_event_scsi.so 00:04:06.878 LIB libspdk_event_nvmf.a 00:04:06.878 SO libspdk_event_nvmf.so.6.0 00:04:06.878 SYMLINK libspdk_event_nvmf.so 00:04:07.136 CC module/event/subsystems/iscsi/iscsi.o 00:04:07.136 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:07.136 LIB libspdk_event_vhost_scsi.a 00:04:07.136 LIB libspdk_event_iscsi.a 00:04:07.136 SO libspdk_event_vhost_scsi.so.3.0 00:04:07.394 SO libspdk_event_iscsi.so.6.0 00:04:07.394 SYMLINK libspdk_event_vhost_scsi.so 00:04:07.394 SYMLINK libspdk_event_iscsi.so 00:04:07.394 SO libspdk.so.6.0 00:04:07.394 SYMLINK libspdk.so 00:04:07.659 TEST_HEADER include/spdk/accel.h 00:04:07.659 TEST_HEADER include/spdk/accel_module.h 00:04:07.659 TEST_HEADER include/spdk/assert.h 00:04:07.659 TEST_HEADER include/spdk/barrier.h 00:04:07.659 TEST_HEADER include/spdk/base64.h 00:04:07.659 TEST_HEADER include/spdk/bdev.h 00:04:07.659 CXX app/trace/trace.o 00:04:07.659 TEST_HEADER include/spdk/bdev_module.h 00:04:07.659 CC app/trace_record/trace_record.o 00:04:07.659 TEST_HEADER include/spdk/bdev_zone.h 00:04:07.659 CC test/rpc_client/rpc_client_test.o 00:04:07.659 CC app/spdk_top/spdk_top.o 00:04:07.659 CC app/spdk_nvme_perf/perf.o 00:04:07.659 CC app/spdk_nvme_discover/discovery_aer.o 00:04:07.659 CC app/spdk_lspci/spdk_lspci.o 00:04:07.659 TEST_HEADER include/spdk/bit_array.h 00:04:07.659 CC app/spdk_nvme_identify/identify.o 00:04:07.659 TEST_HEADER include/spdk/bit_pool.h 00:04:07.659 TEST_HEADER include/spdk/blob_bdev.h 00:04:07.659 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:07.659 TEST_HEADER include/spdk/blobfs.h 00:04:07.659 TEST_HEADER include/spdk/blob.h 00:04:07.659 TEST_HEADER include/spdk/conf.h 00:04:07.659 TEST_HEADER include/spdk/config.h 00:04:07.659 TEST_HEADER include/spdk/cpuset.h 00:04:07.659 TEST_HEADER include/spdk/crc16.h 00:04:07.659 TEST_HEADER include/spdk/crc32.h 00:04:07.659 TEST_HEADER include/spdk/crc64.h 00:04:07.659 TEST_HEADER include/spdk/dif.h 00:04:07.659 TEST_HEADER include/spdk/dma.h 00:04:07.659 TEST_HEADER include/spdk/endian.h 00:04:07.659 TEST_HEADER include/spdk/env_dpdk.h 00:04:07.659 CC app/spdk_dd/spdk_dd.o 00:04:07.659 TEST_HEADER include/spdk/env.h 00:04:07.659 TEST_HEADER include/spdk/event.h 00:04:07.659 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:07.659 TEST_HEADER include/spdk/fd_group.h 00:04:07.659 TEST_HEADER include/spdk/fd.h 00:04:07.659 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.659 TEST_HEADER include/spdk/file.h 00:04:07.659 TEST_HEADER include/spdk/ftl.h 00:04:07.659 TEST_HEADER include/spdk/gpt_spec.h 00:04:07.659 CC app/nvmf_tgt/nvmf_main.o 00:04:07.659 TEST_HEADER include/spdk/hexlify.h 00:04:07.659 TEST_HEADER include/spdk/histogram_data.h 00:04:07.659 CC app/vhost/vhost.o 00:04:07.659 TEST_HEADER include/spdk/idxd.h 00:04:07.659 TEST_HEADER include/spdk/idxd_spec.h 00:04:07.659 TEST_HEADER include/spdk/init.h 00:04:07.659 TEST_HEADER include/spdk/ioat.h 00:04:07.659 TEST_HEADER include/spdk/ioat_spec.h 00:04:07.659 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:07.659 TEST_HEADER include/spdk/json.h 00:04:07.659 TEST_HEADER include/spdk/jsonrpc.h 00:04:07.659 TEST_HEADER include/spdk/keyring.h 00:04:07.659 TEST_HEADER include/spdk/keyring_module.h 00:04:07.659 CC test/app/histogram_perf/histogram_perf.o 00:04:07.659 TEST_HEADER include/spdk/likely.h 00:04:07.659 CC app/spdk_tgt/spdk_tgt.o 00:04:07.659 CC examples/ioat/verify/verify.o 00:04:07.659 TEST_HEADER include/spdk/log.h 00:04:07.659 CC examples/util/zipf/zipf.o 00:04:07.659 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:07.659 CC examples/vmd/led/led.o 00:04:07.659 TEST_HEADER include/spdk/lvol.h 00:04:07.659 CC app/fio/nvme/fio_plugin.o 00:04:07.659 CC examples/ioat/perf/perf.o 00:04:07.659 CC examples/vmd/lsvmd/lsvmd.o 00:04:07.659 TEST_HEADER include/spdk/memory.h 00:04:07.659 CC test/app/jsoncat/jsoncat.o 00:04:07.659 CC examples/nvme/reconnect/reconnect.o 00:04:07.659 CC examples/idxd/perf/perf.o 00:04:07.659 TEST_HEADER include/spdk/mmio.h 00:04:07.659 CC examples/sock/hello_world/hello_sock.o 00:04:07.659 CC examples/nvme/hello_world/hello_world.o 00:04:07.918 TEST_HEADER include/spdk/nbd.h 00:04:07.918 CC test/app/stub/stub.o 00:04:07.918 TEST_HEADER include/spdk/notify.h 00:04:07.918 CC test/event/event_perf/event_perf.o 00:04:07.918 CC test/thread/poller_perf/poller_perf.o 00:04:07.918 CC examples/accel/perf/accel_perf.o 00:04:07.918 CC test/nvme/reset/reset.o 00:04:07.918 CC test/nvme/sgl/sgl.o 00:04:07.918 TEST_HEADER include/spdk/nvme.h 00:04:07.918 TEST_HEADER include/spdk/nvme_intel.h 00:04:07.918 CC test/nvme/aer/aer.o 00:04:07.918 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:07.918 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:07.918 TEST_HEADER include/spdk/nvme_spec.h 00:04:07.918 TEST_HEADER include/spdk/nvme_zns.h 00:04:07.918 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:07.918 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:07.918 TEST_HEADER include/spdk/nvmf.h 00:04:07.918 CC examples/blob/hello_world/hello_blob.o 00:04:07.918 TEST_HEADER include/spdk/nvmf_spec.h 00:04:07.918 CC test/bdev/bdevio/bdevio.o 00:04:07.918 CC examples/blob/cli/blobcli.o 00:04:07.918 CC test/blobfs/mkfs/mkfs.o 00:04:07.918 CC examples/nvmf/nvmf/nvmf.o 00:04:07.918 TEST_HEADER include/spdk/nvmf_transport.h 00:04:07.918 CC examples/thread/thread/thread_ex.o 00:04:07.918 TEST_HEADER include/spdk/opal.h 00:04:07.918 CC test/dma/test_dma/test_dma.o 00:04:07.918 CC test/accel/dif/dif.o 00:04:07.918 CC test/app/bdev_svc/bdev_svc.o 00:04:07.918 TEST_HEADER include/spdk/opal_spec.h 00:04:07.918 TEST_HEADER include/spdk/pci_ids.h 00:04:07.918 TEST_HEADER include/spdk/pipe.h 00:04:07.918 TEST_HEADER include/spdk/queue.h 00:04:07.918 TEST_HEADER include/spdk/reduce.h 00:04:07.918 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.918 TEST_HEADER include/spdk/rpc.h 00:04:07.918 TEST_HEADER include/spdk/scheduler.h 00:04:07.918 TEST_HEADER include/spdk/scsi.h 00:04:07.918 TEST_HEADER include/spdk/scsi_spec.h 00:04:07.918 TEST_HEADER include/spdk/sock.h 00:04:07.918 TEST_HEADER include/spdk/stdinc.h 00:04:07.918 TEST_HEADER include/spdk/string.h 00:04:07.918 TEST_HEADER include/spdk/thread.h 00:04:07.918 TEST_HEADER include/spdk/trace.h 00:04:07.918 TEST_HEADER include/spdk/trace_parser.h 00:04:07.918 TEST_HEADER include/spdk/tree.h 00:04:07.918 TEST_HEADER include/spdk/ublk.h 00:04:07.918 TEST_HEADER include/spdk/util.h 00:04:07.918 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.918 CC test/env/mem_callbacks/mem_callbacks.o 00:04:07.918 TEST_HEADER include/spdk/uuid.h 
00:04:07.918 TEST_HEADER include/spdk/version.h 00:04:07.918 LINK spdk_lspci 00:04:07.918 CC test/lvol/esnap/esnap.o 00:04:07.918 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:07.918 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:07.918 TEST_HEADER include/spdk/vhost.h 00:04:07.918 TEST_HEADER include/spdk/vmd.h 00:04:07.918 TEST_HEADER include/spdk/xor.h 00:04:07.918 TEST_HEADER include/spdk/zipf.h 00:04:07.918 CXX test/cpp_headers/accel.o 00:04:08.180 LINK rpc_client_test 00:04:08.180 LINK spdk_nvme_discover 00:04:08.180 LINK lsvmd 00:04:08.180 LINK jsoncat 00:04:08.180 LINK histogram_perf 00:04:08.180 LINK interrupt_tgt 00:04:08.180 LINK nvmf_tgt 00:04:08.180 LINK poller_perf 00:04:08.180 LINK zipf 00:04:08.180 LINK event_perf 00:04:08.180 LINK led 00:04:08.180 LINK vhost 00:04:08.180 LINK spdk_trace_record 00:04:08.180 LINK stub 00:04:08.180 LINK iscsi_tgt 00:04:08.180 LINK verify 00:04:08.180 LINK spdk_tgt 00:04:08.180 LINK ioat_perf 00:04:08.180 LINK bdev_svc 00:04:08.180 LINK hello_world 00:04:08.180 LINK mkfs 00:04:08.180 LINK hello_sock 00:04:08.455 LINK reset 00:04:08.455 LINK sgl 00:04:08.455 CXX test/cpp_headers/accel_module.o 00:04:08.455 LINK thread 00:04:08.455 LINK hello_blob 00:04:08.455 LINK hello_bdev 00:04:08.455 LINK aer 00:04:08.455 LINK spdk_dd 00:04:08.455 CXX test/cpp_headers/assert.o 00:04:08.455 CC examples/nvme/arbitration/arbitration.o 00:04:08.455 LINK nvmf 00:04:08.455 LINK reconnect 00:04:08.455 LINK idxd_perf 00:04:08.455 LINK spdk_trace 00:04:08.455 CC test/event/reactor/reactor.o 00:04:08.455 CXX test/cpp_headers/barrier.o 00:04:08.455 CC test/env/vtophys/vtophys.o 00:04:08.455 CC test/nvme/e2edp/nvme_dp.o 00:04:08.720 LINK bdevio 00:04:08.720 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:08.720 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:08.720 LINK test_dma 00:04:08.720 CC test/event/reactor_perf/reactor_perf.o 00:04:08.720 CC app/fio/bdev/fio_plugin.o 00:04:08.720 CC examples/nvme/hotplug/hotplug.o 00:04:08.720 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:08.720 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.720 CC examples/bdev/bdevperf/bdevperf.o 00:04:08.720 CC test/env/memory/memory_ut.o 00:04:08.720 CC test/nvme/overhead/overhead.o 00:04:08.720 CC test/event/app_repeat/app_repeat.o 00:04:08.720 LINK nvme_manage 00:04:08.720 LINK dif 00:04:08.720 LINK accel_perf 00:04:08.720 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.720 CC test/env/pci/pci_ut.o 00:04:08.720 CC test/nvme/err_injection/err_injection.o 00:04:08.720 LINK nvme_fuzz 00:04:08.720 CXX test/cpp_headers/base64.o 00:04:08.720 CXX test/cpp_headers/bdev.o 00:04:08.720 CC test/nvme/reserve/reserve.o 00:04:08.720 CXX test/cpp_headers/bdev_module.o 00:04:08.720 LINK blobcli 00:04:08.720 CC test/event/scheduler/scheduler.o 00:04:08.720 CC test/nvme/connect_stress/connect_stress.o 00:04:08.720 CC test/nvme/simple_copy/simple_copy.o 00:04:08.720 CC test/nvme/startup/startup.o 00:04:08.720 CXX test/cpp_headers/bdev_zone.o 00:04:08.720 LINK spdk_nvme 00:04:08.982 LINK reactor 00:04:08.982 LINK vtophys 00:04:08.982 CC test/nvme/boot_partition/boot_partition.o 00:04:08.982 CC test/nvme/compliance/nvme_compliance.o 00:04:08.982 CXX test/cpp_headers/bit_array.o 00:04:08.982 CXX test/cpp_headers/bit_pool.o 00:04:08.982 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.982 CC examples/nvme/abort/abort.o 00:04:08.982 LINK reactor_perf 00:04:08.982 LINK env_dpdk_post_init 00:04:08.982 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.982 CXX 
test/cpp_headers/blob_bdev.o 00:04:08.982 CXX test/cpp_headers/blobfs_bdev.o 00:04:08.982 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.982 CXX test/cpp_headers/blobfs.o 00:04:08.982 LINK app_repeat 00:04:08.982 LINK cmb_copy 00:04:08.982 CXX test/cpp_headers/blob.o 00:04:08.982 LINK arbitration 00:04:08.982 CXX test/cpp_headers/conf.o 00:04:09.241 CC test/nvme/fdp/fdp.o 00:04:09.241 LINK nvme_dp 00:04:09.241 CXX test/cpp_headers/config.o 00:04:09.241 LINK hotplug 00:04:09.241 CXX test/cpp_headers/cpuset.o 00:04:09.241 LINK mem_callbacks 00:04:09.241 CXX test/cpp_headers/crc16.o 00:04:09.241 CXX test/cpp_headers/crc32.o 00:04:09.241 LINK spdk_nvme_perf 00:04:09.241 LINK err_injection 00:04:09.241 CC test/nvme/cuse/cuse.o 00:04:09.241 CXX test/cpp_headers/crc64.o 00:04:09.241 LINK startup 00:04:09.241 LINK boot_partition 00:04:09.241 LINK connect_stress 00:04:09.241 CXX test/cpp_headers/dif.o 00:04:09.241 CXX test/cpp_headers/dma.o 00:04:09.241 CXX test/cpp_headers/endian.o 00:04:09.241 LINK spdk_nvme_identify 00:04:09.241 CXX test/cpp_headers/env.o 00:04:09.241 CXX test/cpp_headers/env_dpdk.o 00:04:09.241 LINK overhead 00:04:09.241 LINK reserve 00:04:09.241 CXX test/cpp_headers/event.o 00:04:09.241 CXX test/cpp_headers/fd_group.o 00:04:09.241 CXX test/cpp_headers/fd.o 00:04:09.241 LINK scheduler 00:04:09.241 LINK pmr_persistence 00:04:09.241 LINK simple_copy 00:04:09.241 LINK spdk_top 00:04:09.509 LINK fused_ordering 00:04:09.509 CXX test/cpp_headers/file.o 00:04:09.509 CXX test/cpp_headers/ftl.o 00:04:09.509 CXX test/cpp_headers/gpt_spec.o 00:04:09.509 CXX test/cpp_headers/hexlify.o 00:04:09.509 CXX test/cpp_headers/histogram_data.o 00:04:09.509 LINK doorbell_aers 00:04:09.509 CXX test/cpp_headers/idxd_spec.o 00:04:09.509 CXX test/cpp_headers/idxd.o 00:04:09.509 CXX test/cpp_headers/init.o 00:04:09.509 CXX test/cpp_headers/ioat.o 00:04:09.509 CXX test/cpp_headers/ioat_spec.o 00:04:09.509 CXX test/cpp_headers/iscsi_spec.o 00:04:09.509 LINK pci_ut 00:04:09.509 CXX test/cpp_headers/json.o 00:04:09.509 CXX test/cpp_headers/jsonrpc.o 00:04:09.509 LINK nvme_compliance 00:04:09.509 CXX test/cpp_headers/keyring.o 00:04:09.509 CXX test/cpp_headers/keyring_module.o 00:04:09.509 CXX test/cpp_headers/likely.o 00:04:09.509 CXX test/cpp_headers/log.o 00:04:09.509 LINK vhost_fuzz 00:04:09.509 CXX test/cpp_headers/lvol.o 00:04:09.509 CXX test/cpp_headers/memory.o 00:04:09.509 CXX test/cpp_headers/mmio.o 00:04:09.509 CXX test/cpp_headers/nbd.o 00:04:09.509 LINK spdk_bdev 00:04:09.509 CXX test/cpp_headers/notify.o 00:04:09.509 CXX test/cpp_headers/nvme.o 00:04:09.509 CXX test/cpp_headers/nvme_intel.o 00:04:09.509 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.509 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.509 CXX test/cpp_headers/nvme_spec.o 00:04:09.509 CXX test/cpp_headers/nvme_zns.o 00:04:09.779 CXX test/cpp_headers/nvmf_cmd.o 00:04:09.779 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:09.779 CXX test/cpp_headers/nvmf.o 00:04:09.779 CXX test/cpp_headers/nvmf_spec.o 00:04:09.779 CXX test/cpp_headers/nvmf_transport.o 00:04:09.779 CXX test/cpp_headers/opal.o 00:04:09.779 CXX test/cpp_headers/opal_spec.o 00:04:09.779 LINK abort 00:04:09.779 CXX test/cpp_headers/pci_ids.o 00:04:09.779 CXX test/cpp_headers/pipe.o 00:04:09.779 CXX test/cpp_headers/queue.o 00:04:09.779 LINK fdp 00:04:09.779 CXX test/cpp_headers/reduce.o 00:04:09.779 CXX test/cpp_headers/rpc.o 00:04:09.779 CXX test/cpp_headers/scheduler.o 00:04:09.779 CXX test/cpp_headers/scsi.o 00:04:09.779 CXX test/cpp_headers/scsi_spec.o 00:04:09.779 CXX 
test/cpp_headers/sock.o 00:04:09.779 CXX test/cpp_headers/stdinc.o 00:04:09.779 CXX test/cpp_headers/string.o 00:04:09.779 CXX test/cpp_headers/thread.o 00:04:09.779 CXX test/cpp_headers/trace.o 00:04:09.779 CXX test/cpp_headers/trace_parser.o 00:04:09.779 CXX test/cpp_headers/tree.o 00:04:09.779 CXX test/cpp_headers/ublk.o 00:04:09.779 CXX test/cpp_headers/util.o 00:04:09.779 CXX test/cpp_headers/uuid.o 00:04:10.044 CXX test/cpp_headers/version.o 00:04:10.044 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.044 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.044 CXX test/cpp_headers/vhost.o 00:04:10.044 CXX test/cpp_headers/vmd.o 00:04:10.044 CXX test/cpp_headers/xor.o 00:04:10.044 CXX test/cpp_headers/zipf.o 00:04:10.044 LINK bdevperf 00:04:10.640 LINK memory_ut 00:04:10.897 LINK iscsi_fuzz 00:04:10.897 LINK cuse 00:04:14.175 LINK esnap 00:04:14.175 00:04:14.175 real 0m40.477s 00:04:14.175 user 7m33.188s 00:04:14.175 sys 1m49.179s 00:04:14.175 03:13:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:14.175 03:13:59 make -- common/autotest_common.sh@10 -- $ set +x 00:04:14.175 ************************************ 00:04:14.175 END TEST make 00:04:14.175 ************************************ 00:04:14.175 03:13:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:14.175 03:13:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:14.175 03:13:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:14.175 03:13:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.175 03:13:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:14.175 03:13:59 -- pm/common@44 -- $ pid=2171062 00:04:14.175 03:13:59 -- pm/common@50 -- $ kill -TERM 2171062 00:04:14.175 03:13:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.175 03:13:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:14.175 03:13:59 -- pm/common@44 -- $ pid=2171064 00:04:14.175 03:13:59 -- pm/common@50 -- $ kill -TERM 2171064 00:04:14.175 03:13:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.175 03:13:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:14.175 03:13:59 -- pm/common@44 -- $ pid=2171065 00:04:14.175 03:13:59 -- pm/common@50 -- $ kill -TERM 2171065 00:04:14.175 03:13:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.175 03:13:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:14.175 03:13:59 -- pm/common@44 -- $ pid=2171092 00:04:14.175 03:13:59 -- pm/common@50 -- $ sudo -E kill -TERM 2171092 00:04:14.176 03:13:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.176 03:13:59 -- nvmf/common.sh@7 -- # uname -s 00:04:14.176 03:13:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.176 03:13:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.176 03:13:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.176 03:13:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.176 03:13:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.176 03:13:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.176 03:13:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.176 03:13:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:04:14.176 03:13:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.176 03:13:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.176 03:13:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:14.176 03:13:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:14.176 03:13:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.176 03:13:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.176 03:13:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:14.176 03:13:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.176 03:13:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.176 03:13:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.176 03:13:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.176 03:13:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.176 03:13:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.176 03:13:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.176 03:13:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.176 03:13:59 -- paths/export.sh@5 -- # export PATH 00:04:14.176 03:13:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.176 03:13:59 -- nvmf/common.sh@47 -- # : 0 00:04:14.176 03:13:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:14.176 03:13:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:14.176 03:13:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.176 03:13:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.176 03:13:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.176 03:13:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:14.176 03:13:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:14.176 03:13:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:14.176 03:13:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:14.176 03:13:59 -- spdk/autotest.sh@32 -- # uname -s 00:04:14.176 03:13:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:14.176 03:13:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:14.176 03:13:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:14.176 03:13:59 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:14.176 03:13:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:14.176 03:13:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:14.176 03:13:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:14.176 03:13:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:14.176 03:13:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2247743 00:04:14.176 03:13:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:14.176 03:13:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:14.176 03:13:59 -- pm/common@17 -- # local monitor 00:04:14.176 03:13:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.176 03:13:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.176 03:13:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.176 03:13:59 -- pm/common@21 -- # date +%s 00:04:14.176 03:13:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:14.176 03:13:59 -- pm/common@21 -- # date +%s 00:04:14.176 03:13:59 -- pm/common@25 -- # sleep 1 00:04:14.176 03:13:59 -- pm/common@21 -- # date +%s 00:04:14.176 03:13:59 -- pm/common@21 -- # date +%s 00:04:14.176 03:13:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721524439 00:04:14.176 03:13:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721524439 00:04:14.176 03:13:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721524439 00:04:14.176 03:13:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721524439 00:04:14.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721524439_collect-vmstat.pm.log 00:04:14.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721524439_collect-cpu-load.pm.log 00:04:14.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721524439_collect-cpu-temp.pm.log 00:04:14.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721524439_collect-bmc-pm.bmc.pm.log 00:04:15.550 03:14:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:15.550 03:14:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:15.550 03:14:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.550 03:14:00 -- common/autotest_common.sh@10 -- # set +x 00:04:15.550 03:14:00 -- spdk/autotest.sh@59 -- # create_test_list 00:04:15.550 03:14:00 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:15.550 03:14:00 -- common/autotest_common.sh@10 -- # set +x 00:04:15.550 03:14:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:15.550 03:14:00 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.550 03:14:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.550 03:14:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:15.550 03:14:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.550 03:14:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:15.550 03:14:00 -- common/autotest_common.sh@1451 -- # uname 00:04:15.550 03:14:00 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:15.550 03:14:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:15.550 03:14:00 -- common/autotest_common.sh@1471 -- # uname 00:04:15.550 03:14:00 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:15.550 03:14:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:15.550 03:14:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:15.550 03:14:00 -- spdk/autotest.sh@72 -- # hash lcov 00:04:15.550 03:14:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:15.550 03:14:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:15.550 --rc lcov_branch_coverage=1 00:04:15.550 --rc lcov_function_coverage=1 00:04:15.550 --rc genhtml_branch_coverage=1 00:04:15.550 --rc genhtml_function_coverage=1 00:04:15.550 --rc genhtml_legend=1 00:04:15.550 --rc geninfo_all_blocks=1 00:04:15.550 ' 00:04:15.550 03:14:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:15.550 --rc lcov_branch_coverage=1 00:04:15.550 --rc lcov_function_coverage=1 00:04:15.550 --rc genhtml_branch_coverage=1 00:04:15.550 --rc genhtml_function_coverage=1 00:04:15.550 --rc genhtml_legend=1 00:04:15.550 --rc geninfo_all_blocks=1 00:04:15.550 ' 00:04:15.550 03:14:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:15.550 --rc lcov_branch_coverage=1 00:04:15.550 --rc lcov_function_coverage=1 00:04:15.550 --rc genhtml_branch_coverage=1 00:04:15.550 --rc genhtml_function_coverage=1 00:04:15.550 --rc genhtml_legend=1 00:04:15.550 --rc geninfo_all_blocks=1 00:04:15.550 --no-external' 00:04:15.550 03:14:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:15.550 --rc lcov_branch_coverage=1 00:04:15.550 --rc lcov_function_coverage=1 00:04:15.550 --rc genhtml_branch_coverage=1 00:04:15.550 --rc genhtml_function_coverage=1 00:04:15.550 --rc genhtml_legend=1 00:04:15.550 --rc geninfo_all_blocks=1 00:04:15.550 --no-external' 00:04:15.551 03:14:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:15.551 lcov: LCOV version 1.14 00:04:15.551 03:14:00 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:30.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:30.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:45.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:45.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:45.277 
geninfo: WARNING: GCOV did not produce any data for the test/cpp_headers/*.gcno note files. The same '<header>.gcno:no functions found' / 'geninfo: WARNING' pair repeats here for roughly sixty public-header compilation stubs (file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, the nvme_* and nvmf_* headers, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, and the last few below). These stubs only verify that each public header compiles standalone, so they contain no executed functions for gcov to report, and the warnings are expected. 00:04:45.277 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:45.278 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:45.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:45.278 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:45.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:45.278 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:45.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:45.278 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:45.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:45.278 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:49.455 03:14:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:49.455 03:14:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.455 03:14:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.455 03:14:33 -- spdk/autotest.sh@91 -- # rm -f 00:04:49.455 03:14:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.019 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:50.019 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:50.019 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:50.019 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:50.019 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:50.019 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:50.019 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:50.019 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:50.019 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:50.019 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:50.019 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:50.019 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:50.019 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:50.019 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:50.019 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:50.276 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:50.276 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:50.276 03:14:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:50.276 03:14:35 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:50.276 03:14:35 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:50.276 03:14:35 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:50.276 03:14:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:50.276 03:14:35 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:50.276 03:14:35 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:50.276 03:14:35 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.276 03:14:35 
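For reference, the get_zoned_devs pass traced just above reduces to one sysfs attribute per block device: /sys/block/<name>/queue/zoned reads 'none' for an ordinary namespace. A minimal standalone sketch of the same check (the loop below is illustrative, not the autotest helper itself):

    #!/usr/bin/env bash
    # Flag any zoned NVMe namespace; non-zoned devices report "none".
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        [[ $(cat "$dev/queue/zoned") != none ]] && echo "zoned: ${dev##*/}"
    done

On this node the single namespace reports 'none', which is why the '[[ none != none ]]' test below falls through and zoned_devs stays empty.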
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:50.276 03:14:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:50.276 03:14:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.276 03:14:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:50.276 03:14:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:50.276 03:14:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:50.276 03:14:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.276 No valid GPT data, bailing 00:04:50.276 03:14:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.276 03:14:35 -- scripts/common.sh@391 -- # pt= 00:04:50.276 03:14:35 -- scripts/common.sh@392 -- # return 1 00:04:50.276 03:14:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.276 1+0 records in 00:04:50.276 1+0 records out 00:04:50.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0022681 s, 462 MB/s 00:04:50.276 03:14:35 -- spdk/autotest.sh@118 -- # sync 00:04:50.276 03:14:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:50.276 03:14:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.276 03:14:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:52.176 03:14:37 -- spdk/autotest.sh@124 -- # uname -s 00:04:52.176 03:14:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:52.176 03:14:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:52.176 03:14:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.176 03:14:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.176 03:14:37 -- common/autotest_common.sh@10 -- # set +x 00:04:52.176 ************************************ 00:04:52.176 START TEST setup.sh 00:04:52.176 ************************************ 00:04:52.176 03:14:37 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:52.436 * Looking for test storage... 00:04:52.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.436 03:14:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:52.436 03:14:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:52.436 03:14:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:52.436 03:14:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.436 03:14:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.436 03:14:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.436 ************************************ 00:04:52.436 START TEST acl 00:04:52.436 ************************************ 00:04:52.436 03:14:37 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:52.436 * Looking for test storage... 
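Stepping back to the wipe step above: spdk-gpt.py found no valid GPT and blkid reported no partition-table type, so autotest zeroed the first MiB of the namespace. A rough equivalent of that guard (DEV is an assumption; this is destructive):

    #!/usr/bin/env bash
    DEV=${DEV:?set DEV to the namespace to wipe, e.g. /dev/nvme0n1}
    # Only scrub when no partition table is detected, mirroring the 'bailing' path.
    if [[ -z $(blkid -s PTTYPE -o value "$DEV") ]]; then
        dd if=/dev/zero of="$DEV" bs=1M count=1
    fi

The dd figures above are self-consistent: 1,048,576 bytes in 0.0022681 s is about 462 MB/s.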
00:04:52.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.436 03:14:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.437 03:14:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.437 03:14:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:52.437 03:14:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:52.437 03:14:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:52.437 03:14:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:52.437 03:14:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:52.437 03:14:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.437 03:14:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.843 03:14:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:53.843 03:14:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:53.843 03:14:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.843 03:14:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:53.843 03:14:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.843 03:14:39 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:55.213 Hugepages 00:04:55.213 node hugesize free / total 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 00:04:55.213 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [xtrace condensed: the identical setup/acl.sh@19 BDF match, setup/acl.sh@20 'ioatdma == nvme' test, and 'continue' repeat for 0000:00:04.2 through 0000:80:04.3, every one of them bound to ioatdma] 00:04:55.213 03:14:40 
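The per-device walk condensed here asks one question per PCI function: which kernel driver owns it. The driver name is just a sysfs symlink, so an equivalent standalone lookup is (a sketch, not the setup.sh implementation):

    #!/usr/bin/env bash
    # Print "<BDF> <driver>" for every PCI function, as the status/acl walk does.
    for dev in /sys/bus/pci/devices/*; do
        driver=none
        [[ -e $dev/driver ]] && driver=$(basename "$(readlink -f "$dev/driver")")
        printf '%s %s\n' "${dev##*/}" "$driver"
    done

Everything bound to ioatdma is skipped; only the NVMe controller at 0000:88:00.0 is collected, as the trace below shows.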
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.213 03:14:40 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.213 03:14:40 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.213 03:14:40 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.213 03:14:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.213 ************************************ 00:04:55.213 START TEST denied 00:04:55.213 ************************************ 00:04:55.213 03:14:40 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:55.213 03:14:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:55.213 03:14:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:55.213 03:14:40 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:55.213 03:14:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.213 03:14:40 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.589 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:56.589 03:14:41 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.589 03:14:41 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.114 00:04:59.114 real 0m3.803s 00:04:59.114 user 0m1.182s 00:04:59.114 sys 0m1.708s 00:04:59.114 03:14:44 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.114 03:14:44 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:59.114 ************************************ 00:04:59.114 END TEST denied 00:04:59.114 ************************************ 00:04:59.114 03:14:44 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:59.114 03:14:44 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.114 03:14:44 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.114 03:14:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.114 ************************************ 00:04:59.114 START TEST allowed 00:04:59.114 ************************************ 00:04:59.114 03:14:44 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:59.114 03:14:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.114 03:14:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:59.114 03:14:44 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:59.114 03:14:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.114 03:14:44 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.640 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.640 03:14:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:01.640 03:14:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:01.640 03:14:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:01.640 03:14:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.640 03:14:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.015 00:05:03.015 real 0m3.918s 00:05:03.015 user 0m1.017s 00:05:03.015 sys 0m1.729s 00:05:03.015 03:14:48 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.015 03:14:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 ************************************ 00:05:03.015 END TEST allowed 00:05:03.015 ************************************ 00:05:03.015 00:05:03.015 real 0m10.539s 00:05:03.015 user 0m3.337s 00:05:03.015 sys 0m5.188s 00:05:03.015 03:14:48 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.015 03:14:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 ************************************ 00:05:03.015 END TEST acl 00:05:03.015 ************************************ 00:05:03.015 03:14:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:03.015 03:14:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.015 03:14:48 setup.sh -- 
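The denied/allowed pair above drives scripts/setup.sh through its two device filters: PCI_BLOCKED pins a BDF to its kernel driver, while PCI_ALLOWED limits rebinding to the listed BDFs. A hedged usage sketch with the controller from this run (exact filter semantics are whatever scripts/setup.sh implements):

    # TEST denied: keep the NVMe controller on the kernel nvme driver.
    sudo PCI_BLOCKED='0000:88:00.0' ./scripts/setup.sh config

    # TEST allowed: hand only that controller to vfio-pci.
    sudo PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config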
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.015 03:14:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 ************************************ 00:05:03.015 START TEST hugepages 00:05:03.015 ************************************ 00:05:03.015 03:14:48 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:03.015 * Looking for test storage... 00:05:03.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41507444 kB' 'MemAvailable: 45001264 kB' 'Buffers: 3736 kB' 'Cached: 12554040 kB' 'SwapCached: 0 kB' 'Active: 9501620 kB' 'Inactive: 3500996 kB' 'Active(anon): 9112612 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448048 kB' 'Mapped: 173616 kB' 'Shmem: 8667772 kB' 'KReclaimable: 194876 kB' 'Slab: 550492 kB' 'SReclaimable: 194876 kB' 'SUnreclaim: 355616 kB' 'KernelStack: 12624 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10241388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:03.015 03:14:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
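The get_meminfo call that starts here scans /proc/meminfo one 'key: value' line at a time until the requested field matches (the per-field iterations are condensed just below). The same lookup as a one-liner, for reference (the awk form is an illustration, not the test's code):

    # Equivalent of 'get_meminfo Hugepagesize'; prints 2048 on this node.
    awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo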
00:05:03.015 03:14:48 setup.sh.hugepages [xtrace condensed: the MemTotal check above falls through to 'continue', and the same setup/common.sh@31 IFS=': ' / read -r var val _ plus setup/common.sh@32 field test and 'continue' repeat for every /proc/meminfo field from MemFree through HugePages_Rsvd without matching Hugepagesize] 
00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:03.017 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.018 03:14:48 
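clear_hp above zeroes every per-node hugepage pool before default_setup reserves 1024 pages on node 0. The sysfs knobs involved, as a sketch (the paths are the standard kernel interface; 1024 is this run's request):

    # What clear_hp's echo-0 loop does, for both nodes and both page sizes:
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 | sudo tee "$hp" > /dev/null
    done

    # default_setup's request: 1024 x 2 MiB pages on node 0.
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages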
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.018 03:14:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:03.018 03:14:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.018 03:14:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.018 03:14:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.018 ************************************ 00:05:03.018 START TEST default_setup 00:05:03.018 ************************************ 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.018 03:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.395 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.395 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:05:04.395 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.395 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:05.334 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43597060 kB' 'MemAvailable: 47090904 kB' 'Buffers: 3736 kB' 'Cached: 12554128 kB' 'SwapCached: 0 kB' 'Active: 9519368 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130360 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465912 kB' 'Mapped: 173672 kB' 'Shmem: 8667860 kB' 'KReclaimable: 194924 kB' 'Slab: 550156 kB' 'SReclaimable: 194924 kB' 'SUnreclaim: 355232 kB' 'KernelStack: 12640 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10261856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[repetitive xtrace condensed: setup/common.sh@31-@32 reads /proc/meminfo key by key ("IFS=': '", "read -r var val _") and hits "continue" on every key until AnonHugePages matches]
00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
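This completes the first get_meminfo call, and the trace spells out the whole helper: mem_f points at /proc/meminfo (or at a per-NUMA-node meminfo when a node argument is given, which is why the empty-node run probes the odd-looking /sys/devices/system/node/node/meminfo path), mapfile loads the file into an array, any "Node N " prefix is stripped, and key/value pairs are read with IFS=': ' until the requested key is found and its value echoed. A minimal standalone sketch of that logic, reconstructed from the trace rather than copied from the SPDK source:

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer that node's sysfs meminfo when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node N "; strip it (extglob pattern)
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # var is the key (colon eaten by IFS), val the number, _ the "kB" unit
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot above, get_meminfo AnonHugePages prints 0 (the echo 0 / return 0 at setup/common.sh@33), which is what the harness stores as anon=0.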
00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43597568 kB' 'MemAvailable: 47091412 kB' 'Buffers: 3736 kB' 'Cached: 12554128 kB' 'SwapCached: 0 kB' 'Active: 9519692 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130684 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466196 kB' 'Mapped: 173656 kB' 'Shmem: 8667860 kB' 'KReclaimable: 194924 kB' 'Slab: 550128 kB' 'SReclaimable: 194924 kB' 'SUnreclaim: 355204 kB' 'KernelStack: 12640 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10261872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[repetitive xtrace condensed: the same key-by-key scan, skipping every /proc/meminfo key until HugePages_Surp matches]
00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
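anon and surp have now been extracted by two full scans of the same snapshot; the resv scan below is the third. For a one-off check outside the harness, the hugepage counters can be pulled in a single pass instead (an illustrative equivalent, not part of the test scripts):

    # Print the four hugepage counters verify_nr_hugepages cares about in one awk pass
    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo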
00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43598008 kB' 'MemAvailable: 47091852 kB' 'Buffers: 3736 kB' 'Cached: 12554148 kB' 'SwapCached: 0 kB' 'Active: 9519660 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130652 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466076 kB' 'Mapped: 173656 kB' 'Shmem: 8667880 kB' 'KReclaimable: 194924 kB' 'Slab: 550168 kB' 'SReclaimable: 194924 kB' 'SUnreclaim: 355244 kB' 'KernelStack: 12640 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10261896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[repetitive xtrace condensed: the same key-by-key scan, skipping every /proc/meminfo key until HugePages_Rsvd matches]
00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.339 nr_hugepages=1024 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.339 resv_hugepages=0 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.339 surplus_hugepages=0 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.339 anon_hugepages=0 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43598008 kB' 'MemAvailable: 47091852 kB' 'Buffers: 3736 kB' 'Cached: 12554168 kB' 'SwapCached: 0 kB' 'Active: 9519696 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130688 kB' 'Inactive(anon): 0 kB'
'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466076 kB' 'Mapped: 173656 kB' 'Shmem: 8667900 kB' 'KReclaimable: 194924 kB' 'Slab: 550168 kB' 'SReclaimable: 194924 kB' 'SUnreclaim: 355244 kB' 'KernelStack: 12640 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10261916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.339 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:05.340 03:14:50 
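[editor's note] The mapfile/extglob pair traced at setup/common.sh@28-29 above is what lets one parser handle both /proc/meminfo and the per-node files, whose lines carry a "Node <id> " prefix. A minimal standalone sketch of that strip (illustrative names, not the test's own code):

    shopt -s extglob                                 # "+([0-9])" below is an extglob pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal: 32829884 kB" -> "MemTotal: 32829884 kB"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"                    # first three cleaned-up lines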
00:05:05.340 03:14:50 setup.sh.hugepages.default_setup -- [repetitive xtrace collapsed: the same key-by-key scan repeats for get=HugePages_Total; every key from MemTotal through Unaccepted fails the match and hits setup/common.sh@32 continue]
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
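[editor's note] The key-by-key scans collapsed above are the whole of get_meminfo: split each meminfo line on ': ', compare the key against the requested field, and echo the value on the first hit. A hedged, simplified re-sketch of that logic (not the canonical setup/common.sh source):

    shopt -s extglob
    get_meminfo() {                                  # usage: get_meminfo <field> [<node>]
        local get=$1 node=${2:-} var val _ mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # strip per-node "Node N " prefixes
        while IFS=': ' read -r var val _; do         # the loop whose iterations were traced above
            [[ $var == "$get" ]] || continue
            echo "${val:-0}" && return 0
        done < <(printf '%s\n' "${mem[@]}")
    }
    get_meminfo HugePages_Total                      # -> 1024 on this runner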
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26455880 kB' 'MemUsed: 6374004 kB' 'SwapCached: 0 kB' 'Active: 3193656 kB' 'Inactive: 146700 kB' 'Active(anon): 3037124 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067768 kB' 'Mapped: 71676 kB' 'AnonPages: 275748 kB' 'Shmem: 2764536 kB' 'KernelStack: 7784 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308720 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 210996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:05.601 03:14:50 setup.sh.hugepages.default_setup -- [repetitive xtrace collapsed: the same IFS=': '/read -r scan walks node0's remaining meminfo keys (MemFree through HugePages_Free); none matches HugePages_Surp, so each iteration hits setup/common.sh@32 continue]
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:05.602 node0=1024 expecting 1024
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:05.602 
00:05:05.602 real 0m2.419s
00:05:05.602 user 0m0.634s
00:05:05.602 sys 0m0.903s
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:05.602 03:14:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:05.602 ************************************
00:05:05.602 END TEST default_setup
00:05:05.602 ************************************
00:05:05.602 03:14:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:05.602 03:14:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:05.602 03:14:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:05.602 03:14:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:05.602 ************************************
00:05:05.602 START TEST per_node_1G_alloc
00:05:05.602 ************************************
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
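[editor's note] The arithmetic get_test_nr_hugepages just performed reduces to: a 1 GiB (1048576 kB) request per node divided by the 2048 kB default hugepage size gives 512 pages for each of the two requested nodes. A hedged standalone check (variable names illustrative, not the test's own):

    size_kb=1048576                                         # requested per-node size, 1 GiB
    hp_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)  # 2048 on this runner
    echo $(( size_kb / hp_kb ))                             # -> 512 pages per node
    # hence the exported NRHUGE=512 HUGENODE=0,1 seen below, 1024 pages in total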
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:05.602 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.603 03:14:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:06.538 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:06.538 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:06.538 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:06.538 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:06.538 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:06.538 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:06.538 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:06.538 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:06.538 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:06.538 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:06.538 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:06.538 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:06.538 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:06.538 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:06.801 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:06.801 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:06.801 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
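[editor's note] setup.sh's internals are not shown in this log; what NRHUGE=512 HUGENODE=0,1 ultimately drive is the kernel's per-node sysfs hugepage pool. A hedged illustration of that interface (not a claim about setup.sh's exact code):

    for node in 0 1; do                      # one 512-page 2 MiB pool per requested node
        echo 512 | sudo tee \
            /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    done
    cat /proc/sys/vm/nr_hugepages            # -> 1024, matching nr_hugepages=1024 above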
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43620164 kB' 'MemAvailable: 47114004 kB' 'Buffers: 3736 kB' 'Cached: 12554240 kB' 'SwapCached: 0 kB' 'Active: 9519600 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130592 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465848 kB' 'Mapped: 173748 kB' 'Shmem: 8667972 kB' 'KReclaimable: 194916 kB' 'Slab: 549924 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 355008 kB' 'KernelStack: 12576 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.801 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: every remaining key of the snapshot above, MemFree through HardwareCorrupted, is tested in turn at setup/common.sh@32 against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped via 'continue' ...]
00:05:06.802 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.802 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.802 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.802 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
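The xtrace above is the get_meminfo helper from setup/common.sh scanning the meminfo snapshot key by key: split each line on IFS=': ', 'continue' past every key that is not the requested one, then echo the value and return. A minimal runnable sketch of that pattern, assuming /proc/meminfo is read directly (the traced helper loops over a mapfile'd array instead, and get_meminfo_sketch is a hypothetical name, not the SPDK function):

    #!/usr/bin/env bash
    # Sketch of the scan pattern visible at setup/common.sh@31-33.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # numeric part only, e.g. 1024
            return 0
        done </proc/meminfo
        return 1                               # key not found (sketch behavior)
    }
    get_meminfo_sketch AnonHugePages           # prints 0 on this box, per the snapshot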
00:05:06.803 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.803 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.803 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[... identical get_meminfo preamble condensed (setup/common.sh@19-31: local var val, local mem_f mem, mem_f=/proc/meminfo, node-meminfo existence check, mapfile -t mem, node-prefix strip, IFS=': ', read -r var val _) ...]
00:05:06.803 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622780 kB' 'MemAvailable: 47116620 kB' 'Buffers: 3736 kB' 'Cached: 12554244 kB' 'SwapCached: 0 kB' 'Active: 9520000 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466272 kB' 'Mapped: 173676 kB' 'Shmem: 8667976 kB' 'KReclaimable: 194916 kB' 'Slab: 549924 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 355008 kB' 'KernelStack: 12672 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[... repetitive xtrace condensed: each snapshot key is tested at setup/common.sh@32 against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via 'continue' until the matching key is reached ...]
00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
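Before each scan, the preamble traced at setup/common.sh@22-29 picks the input file: with no node argument the check '[[ -e /sys/devices/system/node/node/meminfo ]]' fails (note the empty node number) and /proc/meminfo is used; with a node it would read the per-node sysfs meminfo, whose lines carry a "Node N " prefix that the extglob expansion mem=("${mem[@]#Node +([0-9]) }") strips so the same parser handles both. A runnable sketch of that fallback, with names mirroring the trace but the wrapper itself illustrative:

    #!/usr/bin/env bash
    # Per-node meminfo selection and "Node N " prefix stripping, as traced above.
    shopt -s extglob                         # needed for the +([0-9]) pattern
    node=${1-}                               # e.g. 0; leave empty to reproduce this run
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'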
get=HugePages_Rsvd 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.804 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43624584 kB' 'MemAvailable: 47118424 kB' 'Buffers: 3736 kB' 'Cached: 12554264 kB' 'SwapCached: 0 kB' 'Active: 9520048 kB' 'Inactive: 3500996 kB' 'Active(anon): 9131040 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466344 kB' 'Mapped: 173676 kB' 'Shmem: 8667996 kB' 'KReclaimable: 194916 kB' 'Slab: 550044 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 355128 kB' 'KernelStack: 12688 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10264024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.805 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.806 nr_hugepages=1024 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.806 resv_hugepages=0 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.806 surplus_hugepages=0 00:05:06.806 03:14:52 
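The scan that just returned ("echo 0" / "return 0", hence resv=0) is setup/common.sh's get_meminfo walking key/value pairs with IFS=': ' until the requested key turns up; every non-matching key accounts for one "continue" line in the trace. A stripped-down sketch of that loop for the system-wide case only (hypothetical name meminfo_lookup; the real helper also buffers the file through mapfile and handles per-node sources, sketched further down):

    # Simplified stand-alone version of the lookup traced above.
    meminfo_lookup() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip until the key matches
            echo "$val"                        # value only; the "kB" unit lands in "_"
            return 0
        done < /proc/meminfo
        return 1
    }

    resv=$(meminfo_lookup HugePages_Rsvd)   # 0 in this run
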
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.806 anon_hugepages=0 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.806 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43624816 kB' 'MemAvailable: 47118656 kB' 'Buffers: 3736 kB' 'Cached: 12554288 kB' 'SwapCached: 0 kB' 'Active: 9520464 kB' 'Inactive: 3500996 kB' 'Active(anon): 9131456 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466732 kB' 'Mapped: 173676 kB' 'Shmem: 8668020 kB' 'KReclaimable: 194916 kB' 'Slab: 550044 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 355128 kB' 'KernelStack: 12640 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.807 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.068 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 
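Once the HugePages_Total lookup just below returns 1024, hugepages.sh@110 re-checks the invariant already asserted at @107: the pool size must equal nr_hugepages + surplus + reserved. With this run's values the arithmetic is trivially satisfied (a sketch, not the script's literal code):

    nr_hugepages=1024 surp=0 resv=0
    total=1024   # from the HugePages_Total scan below
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi
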
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.069 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27521292 kB' 'MemUsed: 5308592 kB' 'SwapCached: 0 kB' 'Active: 3195192 kB' 'Inactive: 146700 kB' 'Active(anon): 3038660 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067784 kB' 'Mapped: 71696 kB' 'AnonPages: 277340 kB' 'Shmem: 2764552 kB' 'KernelStack: 7816 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308760 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 211036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.070 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16110364 kB' 'MemUsed: 11601460 kB' 'SwapCached: 0 kB' 'Active: 6325560 kB' 'Inactive: 3354296 kB' 'Active(anon): 6093084 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3354296 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9490284 kB' 'Mapped: 102416 kB' 'AnonPages: 189652 kB' 'Shmem: 5903512 kB' 'KernelStack: 4808 kB' 'PageTables: 3168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97192 kB' 'Slab: 241284 kB' 'SReclaimable: 97192 kB' 'SUnreclaim: 144092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 
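The two per-node passes (node=0 earlier, node=1 here) differ from the global one only in the source file: when /sys/devices/system/node/nodeN/meminfo exists, common.sh points mem_f at it, and each captured line's "Node N " prefix is stripped with the extglob substitution mem=("${mem[@]#Node +([0-9]) }") before the same IFS=': ' scan runs. A sketch combining both steps (hypothetical function name node_meminfo_lookup):

    shopt -s extglob
    node_meminfo_lookup() {
        local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    node_meminfo_lookup HugePages_Surp 1   # -> 0 in this run
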
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.071 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 
03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:07.072 node0=512 expecting 512 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:07.072 node1=512 expecting 512 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:07.072 00:05:07.072 real 0m1.455s 00:05:07.072 user 0m0.583s 00:05:07.072 sys 0m0.825s 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.072 03:14:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.072 ************************************ 00:05:07.072 END TEST per_node_1G_alloc 00:05:07.072 ************************************ 00:05:07.072 03:14:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:07.072 03:14:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.072 03:14:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.072 03:14:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.072 ************************************ 00:05:07.072 START TEST even_2G_alloc 
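The field-by-field scan collapsed above is the harness's get_meminfo pattern from setup/common.sh: read /proc/meminfo (or a per-node meminfo file) one "Field: value" pair at a time and echo the value once the requested field matches. A minimal standalone sketch of that pattern, my condensation rather than the repository's exact function:

    #!/usr/bin/env bash
    # get_meminfo FIELD [NODE] -- echo FIELD's value from /proc/meminfo, or
    # from /sys/devices/system/node/node$NODE/meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node meminfo lines carry a "Node N " prefix; strip it first so
        # both file formats split identically on ": ".
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 on the machine traced here
    get_meminfo HugePages_Free 0    # node 0's free hugepage count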
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.072 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
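The arithmetic just traced: 2097152 (in kB, matching the 2048 kB Hugepagesize in the meminfo dumps below) divided by the default hugepage size gives nr_hugepages=1024, and with HUGE_EVEN_ALLOC=yes the @81-@84 loop walks the node index downward handing each of the two nodes an even share of 512. A sketch of that division, with variable names of my choosing and the loop simplified:

    size_kb=2097152      # 2 GiB test allocation
    hugepage_kb=2048     # default hugepage size on this rig
    no_nodes=2

    nr_hugepages=$(( size_kb / hugepage_kb ))            # 1024
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))  # 512 apiece
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}" # node0=512 node1=512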
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.073 03:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:08.004 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:08.004 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:08.004 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:08.004 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:08.004 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:08.004 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:08.004 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:08.004 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:08.004 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:08.004 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:08.004 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:08.004 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:08.004 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:08.004 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:08.004 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:08.004 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:08.004 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.265 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622832 kB' 'MemAvailable: 47116672 kB' 'Buffers: 3736 kB' 'Cached: 12554380 kB' 'SwapCached: 0 kB' 'Active: 9520136 kB' 'Inactive: 3500996 kB' 'Active(anon): 9131128 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466196 kB' 'Mapped: 173680 kB' 'Shmem: 8668112 kB' 'KReclaimable: 194916 kB' 'Slab: 549900 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 354984 kB' 'KernelStack: 12688 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-@32 read/compare/continue repeated for every field ahead of AnonHugePages ...]
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
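The anon=0 above comes from the AnonHugePages line of the dump, and the @96 test before it only proceeds when the bracketed mode in /sys/kernel/mm/transparent_hugepage/enabled is not [never] ([madvise] on this box). Both values can be checked by hand; illustrative commands, not taken from the harness:

    # Active THP mode is the bracketed word: "always [madvise] never" here.
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # Anonymous memory currently backed by THP, in kB (0 in the dump above).
    awk '/^AnonHugePages:/ {print $2}' /proc/meminfo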
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.267 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.268 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622832 kB' 'MemAvailable: 47116672 kB' 'Buffers: 3736 kB' 'Cached: 12554384 kB' 'SwapCached: 0 kB' 'Active: 9520260 kB' 'Inactive: 3500996 kB' 'Active(anon): 9131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466332 kB' 'Mapped: 173676 kB' 'Shmem: 8668116 kB' 'KReclaimable: 194916 kB' 'Slab: 549900 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 354984 kB' 'KernelStack: 12688 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-@32 read/compare/continue repeated for every field ahead of HugePages_Surp, including the non-matching HugePages_Total, HugePages_Free, and HugePages_Rsvd entries ...]
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
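verify_nr_hugepages makes one full pass over /proc/meminfo per counter (anon above, surp here, rsvd next). Outside the harness, the same hugepage counters can be collected in a single pass; an equivalent awk one-liner, my shortcut rather than the harness's method:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # HugePages_Rsvd: 0
    # HugePages_Surp: 0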
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.269 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622848 kB' 'MemAvailable: 47116688 kB' 'Buffers: 3736 kB' 'Cached: 12554396 kB' 'SwapCached: 0 kB' 'Active: 9520544 kB' 'Inactive: 3500996 kB' 'Active(anon): 9131536 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466616 kB' 'Mapped: 173676 kB' 'Shmem: 8668128 kB' 'KReclaimable: 194916 kB' 'Slab: 549900 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 354984 kB' 'KernelStack: 12688 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10262408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-@32 read/compare/continue scanning toward HugePages_Rsvd ...]
var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.270 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 
03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- 
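The scan condensed above is test/setup/common.sh's get_meminfo at work: it reads the whole meminfo file into an array, strips any "Node N " prefix, then walks the array key by key with IFS=': ' read -r var val _ until the requested key matches. The backslash-riddled right-hand sides (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) are only how bash xtrace prints a quoted [[ == ]] operand, i.e. a literal string compare, not a glob. A minimal sketch of the idiom, reconstructed from this trace alone (the real function may differ in detail):

shopt -s extglob                      # the +([0-9]) pattern below needs extglob
get_meminfo() {                       # sketch reconstructed from the xtrace, not SPDK's exact source
  local get=$1 node=$2 var val mem_f mem
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo    # per-node query (common.sh@23-24)
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")    # sysfs per-node lines start with "Node N " (common.sh@29)
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue  # every skipped key is a "continue" in the trace
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

Called as get_meminfo HugePages_Rsvd against the dump above, a function like this prints 0 and returns 0, which is exactly the echo 0 / return 0 pair the trace shows next; the hugepages.sh caller captures that output as resv.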
00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.271 nr_hugepages=1024 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.271 resv_hugepages=0 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.271 surplus_hugepages=0 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.271 anon_hugepages=0 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.271 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43627040 kB' 'MemAvailable: 47120880 kB' 'Buffers: 3736 kB' 'Cached: 12554420 kB' 'SwapCached: 0 kB' 'Active: 9516580 kB' 'Inactive: 3500996 kB' 'Active(anon): 9127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462656 kB' 'Mapped: 172592 kB' 'Shmem: 8668152 kB' 'KReclaimable: 194916 kB' 'Slab: 549892 kB' 'SReclaimable: 194916 kB' 'SUnreclaim: 354976 kB' 'KernelStack: 12624 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10248492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
... (xtrace condensed: the same key-by-key scan of the values above, MemTotal through Unaccepted, looking for HugePages_Total) ...
00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
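At this point hugepages.sh is verifying the even 2G allocation: get_nodes found two NUMA nodes with 512 hugepages requested apiece, and the checks at @107-@110 assert that the global HugePages_Total (1024) equals nr_hugepages plus surplus plus reserved. A back-of-envelope restatement of that arithmetic with the values observed in this trace (variable names are illustrative, not necessarily hugepages.sh's own):

nr_hugepages=1024 surp=0 resv=0                  # from the echoes and get_meminfo results above
declare -A nodes_sys=([0]=512 [1]=512)           # get_nodes: 512 pages requested per NUMA node
(( 1024 == nr_hugepages + surp + resv )) && echo 'global total checks out'
total=0
for node in "${!nodes_sys[@]}"; do (( total += nodes_sys[node] )); done
(( total == nr_hugepages )) && echo '512 + 512 covers all 1024 pages'

The per-node loop that follows repeats the same accounting against each node's own meminfo, folding in the reserved count (nodes_test[node] += resv) and the per-node surplus.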
00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.273 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27523236 kB' 'MemUsed: 5306648 kB' 'SwapCached: 0 kB' 'Active: 3191644 kB' 'Inactive: 146700 kB' 'Active(anon): 3035112 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067784 kB' 'Mapped: 70640 kB' 'AnonPages: 273636 kB' 'Shmem: 2764552 kB' 'KernelStack: 7736 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308572 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 210848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
... (xtrace condensed: the same key-by-key scan of the node0 values above -- MemTotal, MemFree, MemUsed, ... HugePages_Free -- looking for HugePages_Surp) ...
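Unlike the two /proc/meminfo reads earlier, this per-node query pulled /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the expansion at common.sh@29 strips it so both sources parse identically in the read loop. A standalone demonstration of that strip (array contents abbreviated from the node0 dump above):

shopt -s extglob                                    # +([0-9]) is an extglob pattern
mem=('Node 0 MemTotal:       32829884 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")                    # drop the "Node 0 " prefix from each element
printf '%s\n' "${mem[@]}"                           # -> 'MemTotal:       32829884 kB' ...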
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.274 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.532 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16103804 kB' 'MemUsed: 11608020 kB' 'SwapCached: 0 kB' 'Active: 6324768 kB' 'Inactive: 3354296 kB' 'Active(anon): 6092292 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3354296 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9490412 kB' 'Mapped: 101952 kB' 'AnonPages: 188760 kB' 'Shmem: 5903640 kB' 'KernelStack: 4872 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97184 kB' 'Slab: 241304 kB' 'SReclaimable: 97184 kB' 'SUnreclaim: 144120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
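What the trace above keeps exercising is the get_meminfo helper: it pulls one field either from /proc/meminfo or, when a node argument is given, from that node's /sys/devices/system/node/nodeN/meminfo, stripping the "Node N " prefix so both sources parse the same way. The following is a minimal sketch reconstructed from the traced statements (common.sh@17-@33); it is an illustration of the mechanism, not the verbatim SPDK test/setup/common.sh source.

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the xtrace above.
    shopt -s extglob   # needed for the +([0-9]) pattern seen at @29

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node view of the counters.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so both
        # sources look identical (the extglob substitution traced at @29).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Field: value [kB]" records until the requested field matches;
        # the trace shows this as one [[ ... ]]/continue pair per field.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on this CI box, as traced above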
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:08.534 node0=512 expecting 512
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:08.534 node1=512 expecting 512
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:08.534
00:05:08.534 real 0m1.374s
00:05:08.534 user 0m0.573s
00:05:08.534 sys 0m0.762s
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:08.534 03:14:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:08.534 ************************************
00:05:08.534 END TEST even_2G_alloc
00:05:08.534 ************************************
00:05:08.534 03:14:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:08.534 03:14:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:08.534 03:14:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:08.534 03:14:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:08.534 ************************************
00:05:08.534 START TEST odd_alloc
00:05:08.534 ************************************
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
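For reference before following odd_alloc: the even_2G_alloc wrap-up traced just above (hugepages.sh@115-@130) accumulates each node's surplus and reserved pages into nodes_test, then uses the counts as array subscripts so that duplicates collapse, which is why a fully even allocation reduces to the single pattern match [[ 512 == \5\1\2 ]]. The sketch below reproduces that idiom; the variable names follow the trace, but the exact expression behind the @130 comparison is my assumption.

    # Sketch of the per-node verification idiom, reconstructed from the trace.
    declare -a nodes_test=([0]=512 [1]=512)   # expected pages per node (surp + resv)
    declare -a nodes_sys=([0]=512 [1]=512)    # what the kernel actually reports
    declare -a sorted_t=() sorted_s=()

    for node in "${!nodes_test[@]}"; do
        # Using the counts as array *subscripts* deduplicates them: if every
        # node holds the same number of pages, each array keeps a single index.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        # Which count is printed first is an assumption on my part.
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # Both index lists collapse to "512" here, so the comparison the xtrace
    # renders as [[ 512 == \5\1\2 ]] succeeds.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK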
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:08.534 03:14:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:09.464 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:09.464 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:09.464 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:09.464 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:09.464 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:09.464 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:09.464 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:09.464 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:09.464 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:09.464 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:09.464 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:09.464 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:09.464 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:09.464 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:09.464 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:09.464 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:09.464 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622348 kB' 'MemAvailable: 47116184 kB' 'Buffers: 3736 kB' 'Cached: 12554512 kB' 'SwapCached: 0 kB' 'Active: 9516916 kB' 'Inactive: 3500996 kB' 'Active(anon): 9127908 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462856 kB' 'Mapped: 172660 kB' 'Shmem: 8668244 kB' 'KReclaimable: 194908 kB' 'Slab: 550060 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355152 kB' 'KernelStack: 12640 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10248884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
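odd_alloc asks for 2098176 kB, i.e. HUGEMEM=2049 MiB, which at the 2048 kB default hugepage size comes out to the deliberately odd count nr_hugepages=1025; an odd total cannot split evenly across the two NUMA nodes, which is the point of the test. The hugepages.sh@81-@84 trace above assigns node1=512 and then node0=513, and 1025 pages of 2048 kB is exactly the 'Hugetlb: 2099200 kB' visible in the meminfo dump just printed. The following sketch reproduces the distribution arithmetic consistently with the traced values; the real loop lives in test/setup/hugepages.sh, and the exact loop body is inferred, not copied.

    # Sketch of the per-node hugepage split, matching the @81-@84 trace.
    _nr_hugepages=1025
    _no_nodes=2
    declare -a nodes_test

    while (( _no_nodes > 0 )); do
        # Give the node handled now an even share of what remains; the
        # remainder drifts to the node handled last (node0 here).
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ": 513" then ": 0"
        : $(( --_no_nodes ))                                  # traced as ": 1" then ": 0"
    done

    printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"
    # -> node0=513, node1=512: as even as an odd count allows

With the targets set, HUGEMEM=2049 HUGE_EVEN_ALLOC=yes re-runs scripts/setup.sh (hence the vfio-pci rebind messages above) and verify_nr_hugepages then re-reads the counters to check the result.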
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the fields MemTotal through HardwareCorrupted are each read and skipped while scanning for AnonHugePages]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.737 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622408 kB' 'MemAvailable: 47116244 kB' 'Buffers: 3736 kB' 'Cached: 12554512 kB' 'SwapCached: 0 kB' 'Active: 9516848 kB' 'Inactive: 3500996 kB' 'Active(anon): 9127840 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462788 kB' 'Mapped: 172600 kB' 'Shmem: 8668244 kB' 'KReclaimable: 194908 kB' 'Slab: 550048 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355140 kB' 'KernelStack: 12656 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10248900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
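At this point verify_nr_hugepages has established anon=0 (no transparent hugepages interfering, per the @96 check of /sys's "always [madvise] never" mode) and is about to read HugePages_Surp; the dump above already confirms 'HugePages_Total: 1025' and 'HugePages_Free: 1025'. A small sketch of the bookkeeping being performed, reusing the get_meminfo sketch from earlier; the exact arithmetic of hugepages.sh@96-@117 is paraphrased, and the HugePages_Rsvd read is my assumption based on the resv variable traced above.

    # Sketch of the global accounting, with the values from the dump above.
    anon=$(get_meminfo AnonHugePages)     # 0 kB: THP is not inflating the numbers
    surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)    # 0: no pages reserved but not yet faulted
    total=$(get_meminfo HugePages_Total)  # 1025, the odd count requested
    free=$(get_meminfo HugePages_Free)    # 1025, all still unused

    # Sanity: 1025 pages of 2048 kB is the 'Hugetlb: 2099200 kB' figure above.
    (( total * 2048 == 2099200 )) && echo "pool size checks out"

The per-node pass that follows repeats the same HugePages_Surp read against /sys/devices/system/node/node0 and node1, expecting the 513/512 split computed earlier.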
kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462788 kB' 'Mapped: 172600 kB' 'Shmem: 8668244 kB' 'KReclaimable: 194908 kB' 'Slab: 550048 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355140 kB' 'KernelStack: 12656 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10248900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.738 
03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.738 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.739 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 
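A note on the backslash-riddled pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p that repeats through this scan: it is not corruption in the log. It is how bash xtrace prints a quoted right-hand side of == inside [[ ]]; quoting suppresses glob matching, and the trace escapes every character to make the literal comparison visible. A minimal sketch that reproduces the effect (the variable names here are illustrative, not taken from setup/common.sh):

    #!/usr/bin/env bash
    set -x
    var="HugePages_Total"
    # Quoted pattern => literal string compare; xtrace renders the RHS
    # with every character backslash-escaped, as seen throughout this log.
    [[ $var == "HugePages_Surp" ]] && echo matched

Run under bash, the trace line comes out as [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], exactly the shape seen above.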
03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43623084 kB' 'MemAvailable: 47116920 kB' 'Buffers: 3736 kB' 'Cached: 12554532 kB' 'SwapCached: 0 kB' 'Active: 9516884 kB' 'Inactive: 3500996 kB' 'Active(anon): 9127876 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462808 kB' 'Mapped: 172600 kB' 'Shmem: 8668264 kB' 'KReclaimable: 194908 kB' 'Slab: 550072 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355164 kB' 'KernelStack: 12656 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10248920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.740 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 
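For orientation, this long scan is setup/common.sh's get_meminfo helper at work: @28 slurps the meminfo source into an array with mapfile, @31 splits each element with IFS=': ' and read -r var val _, and @32 skips (continue) every field whose name does not match the requested key; on a match, @33 echoes the value and returns. A condensed, self-contained sketch of that pattern, assuming the structure the trace shows rather than quoting the helper verbatim:

    # Usage: get_meminfo_sketch HugePages_Rsvd   -> prints that field's value
    get_meminfo_sketch() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo              # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # "Key:  123 kB" -> var=Key val=123
            [[ $var == "$get" ]] || continue        # not the field we want
            echo "$val"
            return 0
        done
        return 1
    }

The trailing _ in the read soaks up the unit suffix (kB) so val stays purely numeric, which is why the arithmetic checks later in hugepages.sh can use the result directly.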
03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.741 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:09.742 nr_hugepages=1025 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.742 resv_hugepages=0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.742 surplus_hugepages=0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.742 anon_hugepages=0 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.742 03:14:54 
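With that, the odd_alloc bookkeeping is complete: the get_meminfo lookups returned HugePages_Surp=0 (surp, hugepages.sh@99) and HugePages_Rsvd=0 (resv, @100), the script echoes the requested odd page count nr_hugepages=1025, and @107 asserts that the kernel granted exactly what was asked for: 1025 == nr_hugepages + surp + resv. A standalone sketch of that consistency check, with the values hard-coded as this log reports them:

    nr_hugepages=1025   # requested (odd) page count
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1025          # HugePages_Total from /proc/meminfo
    # The kernel must report exactly the requested number of pages,
    # with no surplus pages and none reserved against future faults.
    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch" >&2
        exit 1
    }

The point of allocating an odd count is what comes next: 1025 cannot be split evenly across two NUMA nodes, which is exactly what the per-node checks below go on to verify.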
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43622356 kB' 'MemAvailable: 47116192 kB' 'Buffers: 3736 kB' 'Cached: 12554552 kB' 'SwapCached: 0 kB' 'Active: 9517236 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463204 kB' 'Mapped: 172616 kB' 'Shmem: 8668284 kB' 'KReclaimable: 194908 kB' 'Slab: 550072 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355164 kB' 'KernelStack: 12736 kB' 'PageTables: 7448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10251312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.742 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.743 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27522284 kB' 'MemUsed: 5307600 kB' 'SwapCached: 0 kB' 'Active: 3193028 kB' 'Inactive: 146700 kB' 'Active(anon): 3036496 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067784 kB' 'Mapped: 70648 kB' 'AnonPages: 275072 kB' 'Shmem: 2764552 kB' 'KernelStack: 8056 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308844 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 211120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16099736 kB' 'MemUsed: 11612088 kB' 'SwapCached: 0 kB' 'Active: 6325416 kB' 'Inactive: 3354296 kB' 'Active(anon): 6092940 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3354296 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9490540 kB' 'Mapped: 101968 kB' 'AnonPages: 189292 kB' 'Shmem: 5903768 kB' 'KernelStack: 4856 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97184 kB' 'Slab: 241220 kB' 'SReclaimable: 97184 kB' 'SUnreclaim: 144036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 
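
Both per-node dumps are now on screen: the node0 printf above reports HugePages_Total: 512 and the node1 printf reports 513, i.e. the deliberately odd 1025-page layout this test asserts at hugepages.sh@110. A quick way to eyeball the same split outside the harness (a sketch, assuming a sysfs layout like the one in this log):

    # Print each NUMA node's hugepage total, matching the meminfo dumps above.
    for n in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s\n' "${n##*/}" \
            "$(awk '/HugePages_Total/ {print $NF}' "$n/meminfo")"
    done
    # On this box: node0: 512, node1: 513
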
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:05:09.746 node0=512 expecting 513 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:09.746 node1=513 expecting 512 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:09.746 00:05:09.746 real 0m1.347s 00:05:09.746 user 0m0.573s 00:05:09.746 sys 0m0.735s 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.746 03:14:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.746 ************************************ 00:05:09.746 END TEST odd_alloc 00:05:09.746 ************************************ 00:05:09.747 03:14:55 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:09.747 03:14:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.747 03:14:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.747 03:14:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.747 ************************************ 00:05:09.747 START TEST custom_alloc 00:05:09.747 ************************************ 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.747 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- 
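
The custom_alloc test starting here sizes its pools exactly as the trace lays out: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size gives nr_hugepages=512, split evenly as 256 per node across the two nodes; nodes_hp[0] is then pinned to 512, and a second 2097152 kB (2 GiB) request yields 1024 pages for nodes_hp[1]. A compact sketch of that arithmetic (a simplified stand-in for hugepages.sh's get_test_nr_hugepages, not the full helper):

    #!/usr/bin/env bash
    # Simplified sketch of the sizing math traced here.
    default_hugepages=2048 # kB, matches 'Hugepagesize: 2048 kB' in the dumps
    no_nodes=2

    get_test_nr_hugepages() {
        local size=$1 # requested pool size in kB
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))
    }

    nr0=$(get_test_nr_hugepages 1048576) # 1 GiB -> 512 pages
    nr1=$(get_test_nr_hugepages 2097152) # 2 GiB -> 1024 pages

    echo "even split of first pool: $(( nr0 / no_nodes )) per node" # 256

    # Final per-node targets handed to setup.sh via HUGENODE:
    nodes_hp=("$nr0" "$nr1")
    HUGENODE="nodes_hp[0]=${nodes_hp[0]},nodes_hp[1]=${nodes_hp[1]}"
    echo "$HUGENODE (total $(( nr0 + nr1 )))"
    # -> nodes_hp[0]=512,nodes_hp[1]=1024 (total 1536)
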
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:10.008 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.009 03:14:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.987 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.987 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.987 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.987 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.987 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.987 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.987 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.987 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.987 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.987 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.987 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.987 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.987 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.987 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.987 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.987 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.987 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- 
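
With HUGENODE assembled, the test re-runs scripts/setup.sh so the kernel reserves that per-node layout; the device lines above are setup.sh reporting that each PCI function it manages is already bound to vfio-pci. The invocation boils down to something like the following (path and counts taken from this log; an illustrative call, not a transcript):

    # Ask setup.sh for 512 pages on node0 and 1024 on node1, then check the total.
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

    grep HugePages_Total /proc/meminfo # expect: HugePages_Total: 1536
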
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42555164 kB' 'MemAvailable: 46049000 kB' 'Buffers: 3736 kB' 'Cached: 12554644 kB' 'SwapCached: 0 kB' 'Active: 9517156 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463008 kB' 'Mapped: 172716 kB' 'Shmem: 8668376 kB' 'KReclaimable: 194908 kB' 'Slab: 549972 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355064 kB' 'KernelStack: 12624 kB' 'PageTables: 7436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10249280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.250 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- 
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.251 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.252 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42556472 kB' 'MemAvailable: 46050308 kB' 'Buffers: 3736 kB' 'Cached: 12554644 kB' 'SwapCached: 0 kB' 'Active: 9517160 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128152 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463052 kB' 'Mapped: 172704 kB' 'Shmem: 8668376 kB' 'KReclaimable: 194908 kB' 'Slab: 549976 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355068 kB' 'KernelStack: 12640 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10249296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[log condensed: the @31-32 read/compare/continue cycle repeats for every key from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.253 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42556472 kB' 'MemAvailable: 46050308 kB' 'Buffers: 3736 kB' 'Cached: 12554668 kB' 'SwapCached: 0 kB' 'Active: 9517060 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128052 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462944 kB' 'Mapped: 172616 kB' 'Shmem: 8668400 kB' 'KReclaimable: 194908 kB' 'Slab: 550016 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355108 kB' 'KernelStack: 12656 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10249320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[log condensed: the @31-32 read/compare/continue cycle repeats for every key from MemTotal through HugePages_Free while scanning for HugePages_Rsvd]
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
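Aside: the hugepage counters this helper is fetching one key per call can be read from /proc/meminfo in a single pass with standard tools; against the snapshots shown above, the equivalent query and its output would be:

$ grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
HugePages_Total:    1536
HugePages_Free:     1536
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB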
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:11.255 nr_hugepages=1536
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.255 resv_hugepages=0
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.255 surplus_hugepages=0
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.255 anon_hugepages=0
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
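What setup/hugepages.sh@107-109 is asserting here, spelled out with this run's values (a reconstruction for readability, not the script itself): the requested page count must equal the kernel's nr_hugepages plus any surplus and reserved pages, and, with surp and resv both 0, nr_hugepages alone.

# Worked form of the two arithmetic checks (sketch; values from this run)
nr_hugepages=1536 surp=0 resv=0 anon=0
if (( 1536 == nr_hugepages + surp + resv )) && (( 1536 == nr_hugepages )); then
    # both hold: 1536 == 1536 + 0 + 0, so the test goes on to re-read
    # HugePages_Total from /proc/meminfo as a final cross-check
    echo "hugepage accounting consistent"
fi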
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.255 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42556472 kB' 'MemAvailable: 46050308 kB' 'Buffers: 3736 kB' 'Cached: 12554688 kB' 'SwapCached: 0 kB' 'Active: 9517080 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462944 kB' 'Mapped: 172616 kB' 'Shmem: 8668420 kB' 'KReclaimable: 194908 kB' 'Slab: 550016 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 355108 kB' 'KernelStack: 12656 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10249340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[log condensed: the @31-32 read/compare/continue cycle repeats for every key from MemTotal through ShmemPmdMapped while scanning for HugePages_Total]
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.257 03:14:56
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27516724 kB' 'MemUsed: 5313160 kB' 'SwapCached: 0 kB' 'Active: 3191372 kB' 'Inactive: 146700 kB' 'Active(anon): 3034840 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067792 kB' 'Mapped: 70664 kB' 'AnonPages: 273408 kB' 'Shmem: 2764560 kB' 'KernelStack: 7768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308752 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 211028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.257 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:11.257 [the @32 test / @32 continue / @31 IFS / @31 read cycle repeats for each non-matching node0 key, MemFree through HugePages_Free]
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
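(Node 0's surplus has now been folded into the per-node tally; the same query runs next for node 1. The node enumeration seen at hugepages.sh@29 is an extglob over sysfs paired with the "Node <N> " prefix strip at common.sh@29. A standalone sketch of that enumeration, reusing the get_meminfo sketch above; the array name is illustrative.)

shopt -s extglob nullglob
declare -a hp_total
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                          # "/sys/.../node1" -> "1"
    hp_total[id]=$(get_meminfo HugePages_Total "$id")
done

(On this machine the loop yields hp_total[0]=512 and hp_total[1]=1024, the values echoed by the harness below.)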
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15040188 kB' 'MemUsed: 12671636 kB' 'SwapCached: 0 kB' 'Active: 6325772 kB' 'Inactive: 3354296 kB' 'Active(anon): 6093296 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3354296 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9490688 kB' 'Mapped: 101952 kB' 'AnonPages: 189548 kB' 'Shmem: 5903916 kB' 'KernelStack: 4888 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97184 kB' 'Slab: 241264 kB' 'SReclaimable: 97184 kB' 'SUnreclaim: 144080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.258 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:11.259 [the @32 test / @32 continue / @31 IFS / @31 read cycle repeats for each non-matching node1 key, MemFree through HugePages_Free]
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:11.260 node0=512 expecting 512
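(With both nodes queried, the remaining checks are plain arithmetic plus one literal string compare; the escaped \5\1\2\,\1\0\2\4 about to appear is the same xtrace quoting noted earlier, so it is a literal match, not a glob. A condensed sketch of the verification, reusing get_meminfo and hp_total from the sketches above; variable names are illustrative.)

# Totals must reconcile: global count == requested pages + surplus + reserved.
nr_hugepages=1536 surp=0 resv=0
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
# Per-node split joined into one string and compared literally, as at
# hugepages.sh@130 below.
expected="512,1024"
actual="${hp_total[0]},${hp_total[1]}"
[[ $actual == "$expected" ]] || exit 1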
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:11.260 node1=1024 expecting 1024
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:11.260
00:05:11.260 real 0m1.439s
00:05:11.260 user 0m0.640s
00:05:11.260 sys 0m0.761s
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:11.260 03:14:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:11.260 ************************************
00:05:11.260 END TEST custom_alloc
00:05:11.260 ************************************
00:05:11.260 03:14:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:11.260 03:14:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:11.260 03:14:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:11.260 03:14:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:11.260 ************************************
00:05:11.260 START TEST no_shrink_alloc
00:05:11.260 ************************************
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
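(get_test_nr_hugepages turns a size argument into a page count. Judging only from the trace above (size=2097152, result nr_hugepages=1024, and Hugepagesize: 2048 kB in the meminfo snapshot below), the argument appears to be in kB and the computation is a single division; that unit is an assumption here. A sketch under that assumption, reusing the get_meminfo sketch above; the function name is illustrative.)

get_test_nr_hugepages_sketch() {
    local size_kb=$1                          # e.g. 2097152 (2 GiB in kB), assumed unit
    local hugepage_kb
    hugepage_kb=$(get_meminfo Hugepagesize)   # 2048 on this machine
    (( size_kb >= hugepage_kb )) || return 1  # mirrors the check at hugepages.sh@55
    echo $(( size_kb / hugepage_kb ))         # 2097152 / 2048 = 1024 pages
}

(That 1024 is then pinned to node 0 via nodes_test, matching node_ids=('0') in the trace.)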
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.260 03:14:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.638 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:12.638 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:12.638 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:12.638 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:12.638 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:12.638 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:12.638 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:12.638 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:12.638 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:12.638 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:12.638 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:12.638 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:12.638 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:12.638 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:12.638 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:12.638 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:12.638 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
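(Before counting AnonHugePages, verify_nr_hugepages checks transparent hugepages: /sys/kernel/mm/transparent_hugepage/enabled lists all modes with the active one bracketed, so the test at hugepages.sh@96, where the left side expanded to "always [madvise] never", asks whether THP is anything other than disabled. A standalone sketch of the same probe; the message text is illustrative.)

thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
# e.g. "always [madvise] never"; the bracketed entry is the active mode.
if [[ $thp_mode != *"[never]"* ]]; then
    echo "THP active ($thp_mode): include AnonHugePages in the accounting"
fi

(Here madvise is active, so the harness goes on to query AnonHugePages from /proc/meminfo, as traced next.)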
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43615476 kB' 'MemAvailable: 47109312 kB' 'Buffers: 3736 kB' 'Cached: 12554768 kB' 'SwapCached: 0 kB' 'Active: 9516804 kB' 'Inactive: 3500996 kB' 'Active(anon): 9127796 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462408 kB' 'Mapped: 172636 kB' 'Shmem: 8668500 kB' 'KReclaimable: 194908 kB' 'Slab: 549600 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354692 kB' 'KernelStack: 12624 kB' 'PageTables: 7380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.638 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:12.638 [the @32 test / @32 continue / @31 IFS / @31 read cycle repeats for each non-matching key, MemFree through SecPageTables]
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.640 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43615652 kB' 'MemAvailable: 47109488 kB' 'Buffers: 3736 kB' 'Cached: 12554772 kB' 'SwapCached: 0 kB' 'Active: 9517404 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463044 kB' 'Mapped: 172636 kB' 'Shmem: 8668504 kB' 'KReclaimable: 194908 kB' 'Slab: 549576 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354668 kB' 'KernelStack: 12672 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 
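The loop traced above is the get_meminfo helper in setup/common.sh. As a readability aid, here is a minimal sketch of the same parsing pattern, reconstructed only from what the trace shows (mem_f=/proc/meminfo, the node fallback at common.sh@23-25, mapfile, the extglob prefix strip at common.sh@29, and the IFS=': ' / read -r var val _ scan); it is not the verbatim SPDK source, and the argument handling is assumed:

    #!/usr/bin/env bash
    shopt -s extglob  # must be enabled before the +([0-9]) pattern below is used

    get_meminfo() {
        local get=$1 node=${2:-}   # assumed interface: key name, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read the per-node file instead (common.sh@23-25)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines carry a "Node N " prefix; strip it (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the "continue" entries in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this box, matching the echo 0 above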
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.639 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.640 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43615652 kB' 'MemAvailable: 47109488 kB' 'Buffers: 3736 kB' 'Cached: 12554772 kB' 'SwapCached: 0 kB' 'Active: 9517404 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463044 kB' 'Mapped: 172636 kB' 'Shmem: 8668504 kB' 'KReclaimable: 194908 kB' 'Slab: 549576 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354668 kB' 'KernelStack: 12672 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[trace condensed: the same key-by-key scan at setup/common.sh@32, MemTotal through HugePages_Rsvd versus HugePages_Surp, continuing past every non-match]
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
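A note on the odd-looking patterns: \H\u\g\e\P\a\g\e\s\_\S\u\r\p is not written that way in the script. It is how bash xtrace prints the right-hand side of a quoted [[ ... == "$get" ]] comparison, escaping every character of the expansion to show the match is literal rather than a glob. A quick demo of the same effect:

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]
    # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]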
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.641 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43616424 kB' 'MemAvailable: 47110260 kB' 'Buffers: 3736 kB' 'Cached: 12554788 kB' 'SwapCached: 0 kB' 'Active: 9517392 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463016 kB' 'Mapped: 172636 kB' 'Shmem: 8668520 kB' 'KReclaimable: 194908 kB' 'Slab: 549604 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354696 kB' 'KernelStack: 12672 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
[trace condensed: the same key-by-key scan at setup/common.sh@32, MemTotal through HugePages_Free versus HugePages_Rsvd, continuing past every non-match]
00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
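Every call in this run passes an empty node (local node=), so the existence test at common.sh@23 for /sys/devices/system/node/node/meminfo fails and the scan falls back to /proc/meminfo. The mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 only matters in the per-node case, where each line carries a node-number prefix; a small illustration of the strip, with an example value:

    shopt -s extglob
    # a line as it appears in a per-node file such as /sys/devices/system/node/node0/meminfo
    line='Node 0 HugePages_Rsvd:     0'
    echo "${line#Node +([0-9]) }"   # -> 'HugePages_Rsvd:     0'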
nr_hugepages=1024 00:05:12.643 nr_hugepages=1024 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.643 resv_hugepages=0 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.643 surplus_hugepages=0 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.643 anon_hugepages=0 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.643 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.644 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43617044 kB' 'MemAvailable: 47110880 kB' 'Buffers: 3736 kB' 'Cached: 12554808 kB' 'SwapCached: 0 kB' 'Active: 9519244 kB' 'Inactive: 3500996 kB' 'Active(anon): 9130236 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464832 kB' 'Mapped: 173072 kB' 'Shmem: 8668540 kB' 'KReclaimable: 194908 kB' 'Slab: 549604 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354696 kB' 'KernelStack: 12656 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10251744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:12.644 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.644 03:14:57 setup.sh.hugepages.no_shrink_alloc -- 
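[Annotation] The printf above dumps the meminfo snapshot that get_meminfo just read with mapfile; the trace that follows is one IFS=': ' / read -r var val _ iteration per line until the requested key matches, at which point the value is echoed and the function returns. A minimal stand-alone sketch of that pattern (the function name here is hypothetical; SPDK's real setup/common.sh additionally strips "Node N " prefixes so the same loop works on per-node meminfo files):

    # Scan "Key: value" lines and print the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits on both the colon and spaces, so for
            # "HugePages_Total:    1024" we get var=HugePages_Total, val=1024.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Total   # on this runner would print 1024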
00:05:12.644 [per-key scan of the global meminfo snapshot above elided; every field from MemTotal through HugePages_Free is checked against HugePages_Total with the same IFS=': ' / read / continue iteration]
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
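[Annotation] get_nodes has just recorded the kernel's per-node view (nodes_sys[0]=1024, nodes_sys[1]=0), and the verifier has confirmed the global identity, here 1024 == 1024 + 0 + 0 (nr_hugepages + surplus + reserved). The next step re-reads node0 through /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips with the extglob substitution mem=("${mem[@]#Node +([0-9]) }"). A sketch of the same per-node accounting under the standard sysfs paths (the helper name is hypothetical):

    # Sum each node's 2048 kB hugepage count and compare with the global pool.
    sum_node_hugepages() {
        local node count total=0
        for node in /sys/devices/system/node/node[0-9]*; do
            count=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
            echo "${node##*/}: $count"
            total=$(( total + count ))
        done
        echo "sum=$total, HugePages_Total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)"
    }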
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.645 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26482696 kB' 'MemUsed: 6347188 kB' 'SwapCached: 0 kB' 'Active: 3191792 kB' 'Inactive: 146700 kB' 'Active(anon): 3035260 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067796 kB' 'Mapped: 70684 kB' 'AnonPages: 273804 kB' 'Shmem: 2764564 kB' 'KernelStack: 7768 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308488 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 210764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:12.645 [per-key scan of the node0 meminfo snapshot above elided; each field is checked against HugePages_Surp until it matches]
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:12.647 node0=1024 expecting 1024
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.647 03:14:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:14.023 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.023 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:14.023 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.023 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.023 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.023 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.023 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.023 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.023 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.023 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.023 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.023 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.023 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.023 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.023 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.023 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.023 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.023 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
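[Annotation] This is the point of the no_shrink_alloc test: setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no, and because 1024 pages are already allocated on node0 it refuses to shrink the pool, hence the INFO line above and the second verify_nr_hugepages pass that still expects 1024. A sketch of that grow-only behaviour (the function name is hypothetical; the sysfs path is the standard per-node one and needs root to write):

    # Request N hugepages on a node, but never reduce an existing allocation.
    ensure_hugepages() {
        local want=$1 node=${2:-node0} have
        local f=/sys/devices/system/node/$node/hugepages/hugepages-2048kB/nr_hugepages
        have=$(<"$f")
        if (( have >= want )); then
            echo "INFO: Requested $want hugepages but $have already allocated on $node"
        else
            echo "$want" > "$f"
        fi
    }

    ensure_hugepages 512 node0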
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.023 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43607300 kB' 'MemAvailable: 47101136 kB' 'Buffers: 3736 kB' 'Cached: 12554884 kB' 'SwapCached: 0 kB' 'Active: 9523304 kB' 'Inactive: 3500996 kB' 'Active(anon): 9134296 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468828 kB' 'Mapped: 173576 kB' 'Shmem: 8668616 kB' 'KReclaimable: 194908 kB' 'Slab: 549688 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354780 kB' 'KernelStack: 12640 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10256040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
00:05:14.023 [per-key scan of the meminfo snapshot above elided; each field is checked against AnonHugePages until it matches]
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.024 03:14:59
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43609180 kB' 'MemAvailable: 47103016 kB' 'Buffers: 3736 kB' 'Cached: 12554884 kB' 'SwapCached: 0 kB' 'Active: 9523752 kB' 'Inactive: 3500996 kB' 'Active(anon): 9134744 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469284 kB' 'Mapped: 173564 kB' 'Shmem: 8668616 kB' 'KReclaimable: 194908 kB' 'Slab: 549692 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354784 kB' 'KernelStack: 12704 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10255944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 
03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.024 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 
03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43607068 kB' 'MemAvailable: 47100904 kB' 'Buffers: 3736 kB' 'Cached: 12554896 kB' 'SwapCached: 0 kB' 'Active: 9517204 kB' 
'Inactive: 3500996 kB' 'Active(anon): 9128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462748 kB' 'Mapped: 173128 kB' 'Shmem: 8668628 kB' 'KReclaimable: 194908 kB' 'Slab: 549692 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354784 kB' 'KernelStack: 12624 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.025 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.026 nr_hugepages=1024 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.026 resv_hugepages=0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.026 surplus_hugepages=0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.026 anon_hugepages=0 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43607952 kB' 'MemAvailable: 47101788 kB' 'Buffers: 3736 kB' 
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43607952 kB' 'MemAvailable: 47101788 kB' 'Buffers: 3736 kB' 'Cached: 12554928 kB' 'SwapCached: 0 kB' 'Active: 9517436 kB' 'Inactive: 3500996 kB' 'Active(anon): 9128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 389008 kB' 'Inactive(file): 3500996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462988 kB' 'Mapped: 172660 kB' 'Shmem: 8668660 kB' 'KReclaimable: 194908 kB' 'Slab: 549776 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 354868 kB' 'KernelStack: 12624 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10249868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678940 kB' 'DirectMap2M: 13969408 kB' 'DirectMap1G: 53477376 kB'
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.026 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS / read" cycle repeats for each remaining field listed in the printf above ...]
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
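The get_meminfo call traced above reads /proc/meminfo (or a per-node meminfo file) and walks it with `IFS=': ' read -r var val _` until the requested key matches, which is what produces the long run of compare/continue lines. A condensed stand-alone rendering of the same idea (a sketch, not the verbatim setup/common.sh helper; it strips the per-node "Node <N>" prefix with sed rather than the extglob expansion shown in the trace):

    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Split each line on ':' and spaces, the exact loop pattern above
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total     # -> 1024 on this machine
    get_meminfo HugePages_Surp 0    # per-node query against node0 -> 0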
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26471852 kB' 'MemUsed: 6358032 kB' 'SwapCached: 0 kB' 'Active: 3192164 kB' 'Inactive: 146700 kB' 'Active(anon): 3035632 kB' 'Inactive(anon): 0 kB' 'Active(file): 156532 kB' 'Inactive(file): 146700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3067900 kB' 'Mapped: 70708 kB' 'AnonPages: 274100 kB' 'Shmem: 2764668 kB' 'KernelStack: 7800 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97724 kB' 'Slab: 308572 kB' 'SReclaimable: 97724 kB' 'SUnreclaim: 210848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.027 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS / read" cycle repeats for each remaining node0 meminfo field ...]
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:14.028 node0=1024 expecting 1024
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:14.028 
00:05:14.028 real 0m2.741s
00:05:14.028 user 0m1.115s
00:05:14.028 sys 0m1.545s
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:14.028 03:14:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:14.028 ************************************
00:05:14.028 END TEST no_shrink_alloc
00:05:14.028 ************************************
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:14.028 03:14:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:14.028 
00:05:14.028 real 0m11.170s
00:05:14.028 user 0m4.307s
00:05:14.028 sys 0m5.758s
00:05:14.028 03:14:59 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:14.028 03:14:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:14.028 ************************************
00:05:14.028 END TEST hugepages
00:05:14.028 ************************************
00:05:14.028 03:14:59 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:14.028 03:14:59 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:14.028 03:14:59 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:14.028 03:14:59 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:14.286 ************************************
00:05:14.286 START TEST driver
00:05:14.286 ************************************
00:05:14.286 03:14:59 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:14.286 * Looking for test storage...
00:05:14.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:14.286 03:14:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:05:14.286 03:14:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:14.286 03:14:59 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:16.810 03:15:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:16.810 03:15:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:16.810 03:15:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:16.810 03:15:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:16.810 ************************************
00:05:16.810 START TEST guess_driver
00:05:16.810 ************************************
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:16.810 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:16.810 Looking for driver=vfio-pci
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.810 03:15:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:17.755 03:15:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:17.755 03:15:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:17.755 03:15:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same "[[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read" cycle repeats for each remaining config line ...]
00:05:18.690 03:15:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:18.690 03:15:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:18.690 03:15:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:18.947 03:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:18.947 03:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:18.948 03:15:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:18.948 03:15:04 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:21.472 
00:05:21.472 real 0m4.786s
00:05:21.472 user 0m1.115s
00:05:21.472 sys 0m1.773s
00:05:21.472 03:15:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:21.472 03:15:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:21.472 ************************************
00:05:21.472 END TEST guess_driver
00:05:21.472 ************************************
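guess_driver settles on vfio-pci because the host exposes populated IOMMU groups (the (( 141 > 0 )) check above) and modprobe can resolve vfio_pci's dependency chain. The decision reduces to roughly the following sketch (the uio_pci_generic fallback is an assumption about the usual alternative and is not exercised in this trace):

    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) &&
            modprobe --show-depends vfio_pci >/dev/null 2>&1; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic >/dev/null 2>&1; then
            echo uio_pci_generic    # assumed fallback without a usable IOMMU
        else
            echo 'No valid driver found'
        fi
    }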
00:05:21.472 
00:05:21.472 real 0m7.218s
00:05:21.472 user 0m1.644s
00:05:21.472 sys 0m2.693s
00:05:21.472 03:15:06 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:21.472 03:15:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:21.473 ************************************
00:05:21.473 END TEST driver
00:05:21.473 ************************************
00:05:21.473 03:15:06 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:21.473 03:15:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:21.473 03:15:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:21.473 03:15:06 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:21.473 ************************************
00:05:21.473 START TEST devices
00:05:21.473 ************************************
00:05:21.473 03:15:06 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:21.473 * Looking for test storage...
00:05:21.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:21.473 03:15:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:21.473 03:15:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:05:21.473 03:15:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:21.473 03:15:06 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:05:22.841 03:15:08 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:22.841 No valid GPT data, bailing
00:05:22.841 03:15:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:05:22.841 03:15:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:22.841 03:15:08 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:05:22.841 03:15:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:22.841 03:15:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:23.098 ************************************
00:05:23.098 START TEST nvme_mount
00:05:23.098 ************************************
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.098 03:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.039 Creating new GPT entries in memory. 00:05:24.039 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.039 other utilities. 00:05:24.039 03:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.039 03:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.039 03:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.039 03:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.039 03:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:24.971 Creating new GPT entries in memory. 00:05:24.971 The operation has completed successfully. 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2268468 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:24.971 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.228 03:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.158 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.416 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.416 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.417 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.417 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.674 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:26.674 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:26.674 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.674 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:26.674 03:15:11 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.674 03:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.046 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.047 03:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.047 03:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.977 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.241 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.241 00:05:29.241 real 0m6.257s 00:05:29.241 user 0m1.510s 00:05:29.241 sys 0m2.296s 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.241 03:15:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:29.241 ************************************ 00:05:29.241 END TEST nvme_mount 00:05:29.241 ************************************ 
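The nvme_mount test above reduces to a partition/format/mount/verify/teardown cycle against the single test disk. A minimal standalone sketch of that cycle, assuming a scratch disk at /dev/nvme0n1 whose contents may be destroyed and an illustrative mount point (both values mirror the log but are not lifted from the harness verbatim):

disk=/dev/nvme0n1                      # assumed scratch device; all data on it is destroyed
mnt=/tmp/nvme_mount                    # illustrative mount point
sgdisk "$disk" --zap-all               # wipe existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # one ~1 GiB partition, same sector range as the log
udevadm settle                         # wait for /dev/nvme0n1p1 to appear (the harness uses
                                       # its sync_dev_uevents.sh helper for this instead)
mkfs.ext4 -qF "${disk}p1"              # quiet, forced ext4 format
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                 # dummy file the verify step checks for
umount "$mnt"                          # teardown mirrors the cleanup_nvme steps above:
wipefs --all "${disk}p1"               # erase the ext4 signature on the partition,
wipefs --all "$disk"                   # then the GPT/PMBR signatures on the whole disk

The second half of the test repeats the same cycle on the whole disk (mkfs.ext4 against /dev/nvme0n1 with a 1024M size) instead of a partition, which is why the final wipefs output above shows only an ext4 signature being erased.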
00:05:29.241 03:15:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:29.241 03:15:14 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.241 03:15:14 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.241 03:15:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.241 ************************************ 00:05:29.241 START TEST dm_mount 00:05:29.241 ************************************ 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.241 03:15:14 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:30.199 Creating new GPT entries in memory. 00:05:30.199 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.199 other utilities. 00:05:30.199 03:15:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.199 03:15:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.199 03:15:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.199 03:15:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.199 03:15:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:31.571 Creating new GPT entries in memory. 00:05:31.571 The operation has completed successfully. 
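The partition loop driving these sgdisk calls computes each partition's sector range incrementally: the first partition starts at sector 2048 (1 MiB alignment), each later one starts one sector past the previous end, and every partition is size/512 sectors long. A self-contained sketch of the same arithmetic for the two 1 GiB partitions dm_mount asks for (the scratch device name is an assumption):

disk=/dev/nvme0n1                 # assumed scratch device
size=$(( 1073741824 / 512 ))      # 1 GiB in 512-byte sectors = 2097152
part_start=0 part_end=0
sgdisk "$disk" --zap-all
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
# part 1 gets 2048:2099199 and part 2 gets 2099200:4196351, matching the
# ranges in the log; flock serializes sgdisk against concurrent users of the
# device, and the sync_dev_uevents.sh helper waits for the kernel's partition
# uevents so the /dev nodes exist before the next step runs.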
00:05:31.571 03:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.571 03:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.571 03:15:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.571 03:15:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.571 03:15:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:32.505 The operation has completed successfully. 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2270966 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:32.505 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.506 03:15:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.437 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:33.695 03:15:18 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.695 03:15:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.629 03:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:34.888 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:34.888 00:05:34.888 real 0m5.649s 00:05:34.888 user 0m0.955s 00:05:34.888 sys 0m1.552s 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.888 03:15:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:34.888 ************************************ 00:05:34.888 END TEST dm_mount 00:05:34.888 ************************************ 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.888 03:15:20 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.888 03:15:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:35.146 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:35.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.146 03:15:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.146 00:05:35.146 real 0m13.813s 00:05:35.146 user 0m3.099s 00:05:35.146 sys 0m4.888s 00:05:35.146 03:15:20 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.146 03:15:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.146 ************************************ 00:05:35.146 END TEST devices 00:05:35.146 ************************************ 00:05:35.146 00:05:35.146 real 0m42.987s 00:05:35.146 user 0m12.477s 00:05:35.146 sys 0m18.698s 00:05:35.146 03:15:20 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.146 03:15:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.146 ************************************ 00:05:35.146 END TEST setup.sh 00:05:35.146 ************************************ 00:05:35.404 03:15:20 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.334 Hugepages 00:05:36.334 node hugesize free / total 00:05:36.334 node0 1048576kB 0 / 0 00:05:36.334 node0 2048kB 2048 / 2048 00:05:36.334 node1 1048576kB 0 / 0 00:05:36.334 node1 2048kB 0 / 0 00:05:36.334 00:05:36.334 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.334 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:36.334 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:36.592 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:36.592 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:36.592 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:36.592 03:15:21 -- spdk/autotest.sh@130 -- # uname -s 00:05:36.592 03:15:21 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:36.592 03:15:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:36.592 03:15:21 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:37.961 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.961 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.961 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:38.894 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:38.894 03:15:24 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:39.828 03:15:25 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:39.828 03:15:25 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:39.828 03:15:25 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.828 03:15:25 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:39.828 03:15:25 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:39.828 03:15:25 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:39.828 03:15:25 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.828 03:15:25 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:39.828 03:15:25 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:39.828 03:15:25 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:39.828 03:15:25 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:39.828 03:15:25 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.204 Waiting for block devices as requested 00:05:41.204 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:41.204 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.204 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.463 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.463 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.463 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.463 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:41.722 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:41.722 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:41.723 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.723 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.723 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.981 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.981 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.981 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:42.240 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:42.240 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:42.240 03:15:27 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
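The get_nvme_bdfs helper used throughout this block builds its BDF list by asking SPDK's gen_nvme.sh for a generated bdev config and extracting each controller's PCI address with jq, exactly as the xtrace above shows. A minimal sketch of the same enumeration; the sysfs variant at the end is an assumption for illustration, not something the harness does:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from this log
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"       # prints 0000:88:00.0 on this machine
# Roughly equivalent enumeration straight from sysfs, without SPDK, assuming
# each controller exposes an address file:
for c in /sys/class/nvme/nvme*; do cat "$c/address"; done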
00:05:42.240 03:15:27 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:42.240 03:15:27 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:42.240 03:15:27 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:42.240 03:15:27 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.240 03:15:27 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.240 03:15:27 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:42.240 03:15:27 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.240 03:15:27 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.240 03:15:27 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:42.240 03:15:27 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.240 03:15:27 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.240 03:15:27 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.240 03:15:27 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.240 03:15:27 -- common/autotest_common.sh@1553 -- # continue 00:05:42.240 03:15:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:42.240 03:15:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.240 03:15:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.501 03:15:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:42.501 03:15:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.501 03:15:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.501 03:15:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.432 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.432 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.432 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.690 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.690 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.690 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.690 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.690 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.690 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:44.624 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.624 03:15:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:44.624 03:15:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.624 03:15:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.624 03:15:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:44.624 03:15:29 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:44.624 03:15:29 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.624 03:15:29 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:44.624 03:15:29 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:44.624 03:15:29 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:44.624 03:15:29 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:44.624 03:15:29 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:44.624 03:15:29 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.624 03:15:29 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.624 03:15:29 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:44.881 03:15:29 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:44.881 03:15:29 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:44.881 03:15:29 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.881 03:15:29 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:44.881 03:15:29 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:44.881 03:15:29 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:44.881 03:15:29 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:44.881 03:15:29 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:44.881 03:15:29 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:44.881 03:15:29 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2276140 00:05:44.881 03:15:29 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.881 03:15:29 -- common/autotest_common.sh@1594 -- # waitforlisten 2276140 00:05:44.881 03:15:29 -- common/autotest_common.sh@827 -- # '[' -z 2276140 ']' 00:05:44.881 03:15:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.881 03:15:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.881 03:15:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.881 03:15:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.881 03:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:44.881 [2024-07-21 03:15:30.025920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
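The opal_revert_cleanup flow above launches spdk_tgt in the background and blocks until its JSON-RPC socket answers before issuing any commands. A minimal sketch of that launch-and-wait pattern, assuming the default socket path /var/tmp/spdk.sock; the polling loop is a simplification of the harness's waitforlisten helper:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk.sock
"$rootdir/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
for _ in $(seq 1 100); do        # poll up to ~10 s for the RPC socket
    "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
# Once the socket answers, RPCs can be issued, e.g. attaching the controller
# by PCI address as the log does before the opal revert attempt:
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid" || true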
00:05:44.882 [2024-07-21 03:15:30.026011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276140 ] 00:05:44.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.882 [2024-07-21 03:15:30.089998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.882 [2024-07-21 03:15:30.179512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.138 03:15:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.138 03:15:30 -- common/autotest_common.sh@860 -- # return 0 00:05:45.138 03:15:30 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:45.138 03:15:30 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:45.138 03:15:30 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:48.438 nvme0n1 00:05:48.438 03:15:33 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:48.438 [2024-07-21 03:15:33.740117] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:48.438 [2024-07-21 03:15:33.740171] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:48.724 request: 00:05:48.724 { 00:05:48.724 "nvme_ctrlr_name": "nvme0", 00:05:48.724 "password": "test", 00:05:48.724 "method": "bdev_nvme_opal_revert", 00:05:48.724 "req_id": 1 00:05:48.724 } 00:05:48.724 Got JSON-RPC error response 00:05:48.724 response: 00:05:48.724 { 00:05:48.724 "code": -32603, 00:05:48.724 "message": "Internal error" 00:05:48.724 } 00:05:48.724 03:15:33 -- common/autotest_common.sh@1600 -- # true 00:05:48.724 03:15:33 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:48.724 03:15:33 -- common/autotest_common.sh@1604 -- # killprocess 2276140 00:05:48.724 03:15:33 -- common/autotest_common.sh@946 -- # '[' -z 2276140 ']' 00:05:48.724 03:15:33 -- common/autotest_common.sh@950 -- # kill -0 2276140 00:05:48.724 03:15:33 -- common/autotest_common.sh@951 -- # uname 00:05:48.724 03:15:33 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.724 03:15:33 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2276140 00:05:48.724 03:15:33 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.724 03:15:33 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.724 03:15:33 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2276140' 00:05:48.724 killing process with pid 2276140 00:05:48.724 03:15:33 -- common/autotest_common.sh@965 -- # kill 2276140 00:05:48.724 03:15:33 -- common/autotest_common.sh@970 -- # wait 2276140 00:05:50.619 03:15:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:50.619 03:15:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:50.619 03:15:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.619 03:15:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.619 03:15:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:50.619 03:15:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.619 03:15:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.619 03:15:35 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:50.619 03:15:35 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.619 03:15:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.619 03:15:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.619 03:15:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.619 ************************************ 00:05:50.619 START TEST env 00:05:50.619 ************************************ 00:05:50.619 03:15:35 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.619 * Looking for test storage... 00:05:50.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:50.619 03:15:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.619 03:15:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.619 03:15:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.619 03:15:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.619 ************************************ 00:05:50.619 START TEST env_memory 00:05:50.619 ************************************ 00:05:50.619 03:15:35 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.619 00:05:50.619 00:05:50.619 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.619 http://cunit.sourceforge.net/ 00:05:50.619 00:05:50.619 00:05:50.619 Suite: memory 00:05:50.619 Test: alloc and free memory map ...[2024-07-21 03:15:35.670330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.619 passed 00:05:50.619 Test: mem map translation ...[2024-07-21 03:15:35.690877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.619 [2024-07-21 03:15:35.690921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.619 [2024-07-21 03:15:35.690988] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.619 [2024-07-21 03:15:35.691000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.619 passed 00:05:50.619 Test: mem map registration ...[2024-07-21 03:15:35.732437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:50.619 [2024-07-21 03:15:35.732457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:50.619 passed 00:05:50.619 Test: mem map adjacent registrations ...passed 00:05:50.619 00:05:50.619 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.619 suites 1 1 n/a 0 0 00:05:50.619 tests 4 4 4 0 0 00:05:50.619 asserts 152 152 152 0 n/a 00:05:50.619 00:05:50.619 Elapsed time = 0.142 seconds 00:05:50.619 00:05:50.619 real 0m0.150s 00:05:50.619 user 0m0.145s 00:05:50.619 sys 0m0.005s 00:05:50.619 03:15:35 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.619 03:15:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.619 ************************************ 00:05:50.619 END TEST env_memory 00:05:50.619 ************************************ 00:05:50.619 03:15:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.619 03:15:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.619 03:15:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.619 03:15:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.619 ************************************ 00:05:50.619 START TEST env_vtophys 00:05:50.619 ************************************ 00:05:50.619 03:15:35 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.619 EAL: lib.eal log level changed from notice to debug 00:05:50.619 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.619 EAL: Detected lcore 1 as core 1 on socket 0 00:05:50.619 EAL: Detected lcore 2 as core 2 on socket 0 00:05:50.619 EAL: Detected lcore 3 as core 3 on socket 0 00:05:50.619 EAL: Detected lcore 4 as core 4 on socket 0 00:05:50.619 EAL: Detected lcore 5 as core 5 on socket 0 00:05:50.619 EAL: Detected lcore 6 as core 8 on socket 0 00:05:50.619 EAL: Detected lcore 7 as core 9 on socket 0 00:05:50.619 EAL: Detected lcore 8 as core 10 on socket 0 00:05:50.619 EAL: Detected lcore 9 as core 11 on socket 0 00:05:50.619 EAL: Detected lcore 10 as core 12 on socket 0 00:05:50.619 EAL: Detected lcore 11 as core 13 on socket 0 00:05:50.619 EAL: Detected lcore 12 as core 0 on socket 1 00:05:50.619 EAL: Detected lcore 13 as core 1 on socket 1 00:05:50.619 EAL: Detected lcore 14 as core 2 on socket 1 00:05:50.619 EAL: Detected lcore 15 as core 3 on socket 1 00:05:50.619 EAL: Detected lcore 16 as core 4 on socket 1 00:05:50.619 EAL: Detected lcore 17 as core 5 on socket 1 00:05:50.619 EAL: Detected lcore 18 as core 8 on socket 1 00:05:50.619 EAL: Detected lcore 19 as core 9 on socket 1 00:05:50.619 EAL: Detected lcore 20 as core 10 on socket 1 00:05:50.619 EAL: Detected lcore 21 as core 11 on socket 1 00:05:50.619 EAL: Detected lcore 22 as core 12 on socket 1 00:05:50.619 EAL: Detected lcore 23 as core 13 on socket 1 00:05:50.619 EAL: Detected lcore 24 as core 0 on socket 0 00:05:50.619 EAL: Detected lcore 25 as core 1 on socket 0 00:05:50.619 EAL: Detected lcore 26 as core 2 on socket 0 00:05:50.619 EAL: Detected lcore 27 as core 3 on socket 0 00:05:50.620 EAL: Detected lcore 28 as core 4 on socket 0 00:05:50.620 EAL: Detected lcore 29 as core 5 on socket 0 00:05:50.620 EAL: Detected lcore 30 as core 8 on socket 0 00:05:50.620 EAL: Detected lcore 31 as core 9 on socket 0 00:05:50.620 EAL: Detected lcore 32 as core 10 on socket 0 00:05:50.620 EAL: Detected lcore 33 as core 11 on socket 0 00:05:50.620 EAL: Detected lcore 34 as core 12 on socket 0 00:05:50.620 EAL: Detected lcore 35 as core 13 on socket 0 00:05:50.620 EAL: Detected lcore 36 as core 0 on socket 1 00:05:50.620 EAL: Detected lcore 37 as core 1 on socket 1 00:05:50.620 EAL: Detected lcore 38 as core 2 on socket 1 00:05:50.620 EAL: Detected lcore 39 as core 3 on socket 1 00:05:50.620 EAL: Detected lcore 40 as core 4 on socket 1 00:05:50.620 EAL: Detected lcore 41 as core 5 on socket 1 00:05:50.620 EAL: Detected lcore 42 as core 8 on socket 1 00:05:50.620 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:50.620 EAL: Detected lcore 44 as core 10 on socket 1 00:05:50.620 EAL: Detected lcore 45 as core 11 on socket 1 00:05:50.620 EAL: Detected lcore 46 as core 12 on socket 1 00:05:50.620 EAL: Detected lcore 47 as core 13 on socket 1 00:05:50.620 EAL: Maximum logical cores by configuration: 128 00:05:50.620 EAL: Detected CPU lcores: 48 00:05:50.620 EAL: Detected NUMA nodes: 2 00:05:50.620 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.620 EAL: Detected shared linkage of DPDK 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:50.620 EAL: Registered [vdev] bus. 00:05:50.620 EAL: bus.vdev log level changed from disabled to notice 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:50.620 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:50.620 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:50.620 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:50.620 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: Bus pci wants IOVA as 'DC' 00:05:50.620 EAL: Bus vdev wants IOVA as 'DC' 00:05:50.620 EAL: Buses did not request a specific IOVA mode. 00:05:50.620 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:50.620 EAL: Selected IOVA mode 'VA' 00:05:50.620 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.620 EAL: Probing VFIO support... 00:05:50.620 EAL: IOMMU type 1 (Type 1) is supported 00:05:50.620 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:50.620 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:50.620 EAL: VFIO support initialized 00:05:50.620 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.620 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.620 EAL: Setting up physically contiguous memory... 
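The linkage probing and IOVA-mode selection above ("IOMMU is available, selecting IOVA as VA mode"; "IOMMU type 1 (Type 1) is supported") happen inside EAL in C, via VFIO ioctls. As a rough shell approximation of the same checks, useful for eyeballing a test node before a run, one might do:

    # IOVA as VA needs a live IOMMU: any group under this sysfs path will do
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo 'IOMMU groups present -> IOVA as VA is possible'
    else
        echo 'no IOMMU groups -> EAL would have to fall back to IOVA as PA'
    fi
    # "IOMMU type 1" support corresponds to the vfio_iommu_type1 module
    lsmod | grep -q vfio_iommu_type1 && echo 'VFIO type 1 available'

This is an illustration only; EAL's actual decision also weighs what each bus requests (the "Bus pci wants IOVA as 'DC'" lines above).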
00:05:50.620 EAL: Setting maximum number of open files to 524288 00:05:50.620 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.620 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:50.620 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.620 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:50.620 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.620 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:50.620 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.620 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.620 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:50.620 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:50.620 EAL: Hugepages will be freed exactly as allocated. 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: TSC frequency is ~2700000 KHz 00:05:50.620 EAL: Main lcore 0 is ready (tid=7f5aa9e44a00;cpuset=[0]) 00:05:50.620 EAL: Trying to obtain current memory policy. 00:05:50.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.620 EAL: Restoring previous memory policy: 0 00:05:50.620 EAL: request: mp_malloc_sync 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: No PCI address specified using 'addr=<id>' in: bus=pci 00:05:50.620 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.620 00:05:50.620 00:05:50.620 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.621 http://cunit.sourceforge.net/ 00:05:50.621 00:05:50.621 00:05:50.621 Suite: components_suite 00:05:50.621 Test: vtophys_malloc_test ...passed 00:05:50.621 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.621 EAL: Restoring previous memory policy: 4 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.621 EAL: Trying to obtain current memory policy. 00:05:50.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.621 EAL: Restoring previous memory policy: 4 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.621 EAL: Trying to obtain current memory policy. 00:05:50.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.621 EAL: Restoring previous memory policy: 4 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.621 EAL: Trying to obtain current memory policy. 
00:05:50.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.621 EAL: Restoring previous memory policy: 4 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.621 EAL: Trying to obtain current memory policy. 00:05:50.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.621 EAL: Restoring previous memory policy: 4 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.621 EAL: request: mp_malloc_sync 00:05:50.621 EAL: No shared files mode enabled, IPC is disabled 00:05:50.621 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.923 EAL: request: mp_malloc_sync 00:05:50.923 EAL: No shared files mode enabled, IPC is disabled 00:05:50.923 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.923 EAL: Trying to obtain current memory policy. 00:05:50.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.924 EAL: Restoring previous memory policy: 4 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.924 EAL: Trying to obtain current memory policy. 00:05:50.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.924 EAL: Restoring previous memory policy: 4 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.924 EAL: Trying to obtain current memory policy. 00:05:50.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.924 EAL: Restoring previous memory policy: 4 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.924 EAL: request: mp_malloc_sync 00:05:50.924 EAL: No shared files mode enabled, IPC is disabled 00:05:50.924 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.924 EAL: Trying to obtain current memory policy. 
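The expand/shrink sizes in vtophys_spdk_malloc_test are easy to misread as arbitrary, but 4, 6, 10, 18, 34, 66, 130 and 258 MB (and the 514 MB and 1026 MB rounds that follow) all match 2^k + 2 MB, which a one-liner confirms:

    for k in {1..10}; do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB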
00:05:50.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.180 EAL: Restoring previous memory policy: 4 00:05:51.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.180 EAL: request: mp_malloc_sync 00:05:51.180 EAL: No shared files mode enabled, IPC is disabled 00:05:51.180 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.437 EAL: request: mp_malloc_sync 00:05:51.437 EAL: No shared files mode enabled, IPC is disabled 00:05:51.437 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.437 EAL: Trying to obtain current memory policy. 00:05:51.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.694 EAL: Restoring previous memory policy: 4 00:05:51.694 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.694 EAL: request: mp_malloc_sync 00:05:51.694 EAL: No shared files mode enabled, IPC is disabled 00:05:51.694 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.951 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.208 EAL: request: mp_malloc_sync 00:05:52.208 EAL: No shared files mode enabled, IPC is disabled 00:05:52.208 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.208 passed 00:05:52.208 00:05:52.208 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.208 suites 1 1 n/a 0 0 00:05:52.208 tests 2 2 2 0 0 00:05:52.208 asserts 497 497 497 0 n/a 00:05:52.208 00:05:52.208 Elapsed time = 1.365 seconds 00:05:52.208 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.208 EAL: request: mp_malloc_sync 00:05:52.208 EAL: No shared files mode enabled, IPC is disabled 00:05:52.208 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.208 EAL: No shared files mode enabled, IPC is disabled 00:05:52.208 EAL: No shared files mode enabled, IPC is disabled 00:05:52.208 EAL: No shared files mode enabled, IPC is disabled 00:05:52.208 00:05:52.208 real 0m1.481s 00:05:52.208 user 0m0.851s 00:05:52.208 sys 0m0.595s 00:05:52.208 03:15:37 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.208 03:15:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.208 ************************************ 00:05:52.208 END TEST env_vtophys 00:05:52.208 ************************************ 00:05:52.208 03:15:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.208 03:15:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.208 03:15:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.208 03:15:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.208 ************************************ 00:05:52.208 START TEST env_pci 00:05:52.208 ************************************ 00:05:52.208 03:15:37 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.208 00:05:52.208 00:05:52.208 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.208 http://cunit.sourceforge.net/ 00:05:52.208 00:05:52.208 00:05:52.208 Suite: pci 00:05:52.208 Test: pci_hook ...[2024-07-21 03:15:37.361847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2277029 has claimed it 00:05:52.208 EAL: Cannot find device (10000:00:01.0) 00:05:52.208 EAL: Failed to attach device on primary process 00:05:52.208 passed 00:05:52.208 00:05:52.208 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:52.208 suites 1 1 n/a 0 0 00:05:52.208 tests 1 1 1 0 0 00:05:52.208 asserts 25 25 25 0 n/a 00:05:52.208 00:05:52.208 Elapsed time = 0.022 seconds 00:05:52.208 00:05:52.208 real 0m0.034s 00:05:52.208 user 0m0.008s 00:05:52.208 sys 0m0.026s 00:05:52.208 03:15:37 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.208 03:15:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.209 ************************************ 00:05:52.209 END TEST env_pci 00:05:52.209 ************************************ 00:05:52.209 03:15:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.209 03:15:37 env -- env/env.sh@15 -- # uname 00:05:52.209 03:15:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.209 03:15:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.209 03:15:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.209 03:15:37 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:52.209 03:15:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.209 03:15:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.209 ************************************ 00:05:52.209 START TEST env_dpdk_post_init 00:05:52.209 ************************************ 00:05:52.209 03:15:37 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.209 EAL: Detected CPU lcores: 48 00:05:52.209 EAL: Detected NUMA nodes: 2 00:05:52.209 EAL: Detected shared linkage of DPDK 00:05:52.209 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.209 EAL: Selected IOVA mode 'VA' 00:05:52.209 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.209 EAL: VFIO support initialized 00:05:52.209 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.468 EAL: Using IOMMU type 1 (Type 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:52.468 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:53.404 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:56.711 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:56.711 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:56.711 Starting DPDK initialization... 00:05:56.711 Starting SPDK post initialization... 00:05:56.711 SPDK NVMe probe 00:05:56.711 Attaching to 0000:88:00.0 00:05:56.711 Attached to 0000:88:00.0 00:05:56.711 Cleaning up... 00:05:56.711 00:05:56.711 real 0m4.379s 00:05:56.711 user 0m3.261s 00:05:56.711 sys 0m0.178s 00:05:56.711 03:15:41 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.711 03:15:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.711 ************************************ 00:05:56.711 END TEST env_dpdk_post_init 00:05:56.711 ************************************ 00:05:56.711 03:15:41 env -- env/env.sh@26 -- # uname 00:05:56.711 03:15:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.711 03:15:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.711 03:15:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.711 03:15:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.711 03:15:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.711 ************************************ 00:05:56.711 START TEST env_mem_callbacks 00:05:56.711 ************************************ 00:05:56.711 03:15:41 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.711 EAL: Detected CPU lcores: 48 00:05:56.711 EAL: Detected NUMA nodes: 2 00:05:56.711 EAL: Detected shared linkage of DPDK 00:05:56.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.711 EAL: Selected IOVA mode 'VA' 00:05:56.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.711 EAL: VFIO support initialized 00:05:56.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.711 00:05:56.711 00:05:56.711 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.711 http://cunit.sourceforge.net/ 00:05:56.711 00:05:56.711 00:05:56.711 Suite: memory 00:05:56.711 Test: test ... 
00:05:56.711 register 0x200000200000 2097152 00:05:56.711 malloc 3145728 00:05:56.711 register 0x200000400000 4194304 00:05:56.711 buf 0x200000500000 len 3145728 PASSED 00:05:56.711 malloc 64 00:05:56.711 buf 0x2000004fff40 len 64 PASSED 00:05:56.711 malloc 4194304 00:05:56.711 register 0x200000800000 6291456 00:05:56.711 buf 0x200000a00000 len 4194304 PASSED 00:05:56.711 free 0x200000500000 3145728 00:05:56.711 free 0x2000004fff40 64 00:05:56.711 unregister 0x200000400000 4194304 PASSED 00:05:56.711 free 0x200000a00000 4194304 00:05:56.711 unregister 0x200000800000 6291456 PASSED 00:05:56.711 malloc 8388608 00:05:56.711 register 0x200000400000 10485760 00:05:56.711 buf 0x200000600000 len 8388608 PASSED 00:05:56.711 free 0x200000600000 8388608 00:05:56.711 unregister 0x200000400000 10485760 PASSED 00:05:56.711 passed 00:05:56.711 00:05:56.711 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.711 suites 1 1 n/a 0 0 00:05:56.711 tests 1 1 1 0 0 00:05:56.711 asserts 15 15 15 0 n/a 00:05:56.711 00:05:56.711 Elapsed time = 0.005 seconds 00:05:56.711 00:05:56.711 real 0m0.049s 00:05:56.711 user 0m0.017s 00:05:56.711 sys 0m0.032s 00:05:56.711 03:15:41 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.711 03:15:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:56.711 ************************************ 00:05:56.711 END TEST env_mem_callbacks 00:05:56.711 ************************************ 00:05:56.711 00:05:56.711 real 0m6.362s 00:05:56.711 user 0m4.393s 00:05:56.711 sys 0m1.011s 00:05:56.711 03:15:41 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.711 03:15:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.711 ************************************ 00:05:56.711 END TEST env 00:05:56.711 ************************************ 00:05:56.711 03:15:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.711 03:15:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.711 03:15:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.711 03:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:56.711 ************************************ 00:05:56.711 START TEST rpc 00:05:56.711 ************************************ 00:05:56.711 03:15:41 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.711 * Looking for test storage... 00:05:56.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.711 03:15:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2277691 00:05:56.711 03:15:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:56.711 03:15:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.711 03:15:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2277691 00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@827 -- # '[' -z 2277691 ']' 00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
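The waitforlisten call traced here (as earlier, for the opal-revert target) boils down to polling the target's RPC socket until it answers while the pid stays alive. A simplified sketch, assuming rpc.py's rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is considerably more defensive):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # -0 only probes for existence
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }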
00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.711 03:15:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.969 [2024-07-21 03:15:42.073148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:56.969 [2024-07-21 03:15:42.073256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277691 ] 00:05:56.969 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.969 [2024-07-21 03:15:42.134113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.969 [2024-07-21 03:15:42.218426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:56.969 [2024-07-21 03:15:42.218499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2277691' to capture a snapshot of events at runtime. 00:05:56.969 [2024-07-21 03:15:42.218522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.969 [2024-07-21 03:15:42.218534] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.969 [2024-07-21 03:15:42.218544] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2277691 for offline analysis/debug. 00:05:56.969 [2024-07-21 03:15:42.218571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.226 03:15:42 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.226 03:15:42 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:57.226 03:15:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.226 03:15:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.226 03:15:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:57.226 03:15:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:57.226 03:15:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.226 03:15:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.226 03:15:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.226 ************************************ 00:05:57.226 START TEST rpc_integrity 00:05:57.226 ************************************ 00:05:57.226 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:57.226 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.226 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.226 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.226 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.226 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.226 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.482 03:15:42 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.482 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.482 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.482 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.482 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.482 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.483 { 00:05:57.483 "name": "Malloc0", 00:05:57.483 "aliases": [ 00:05:57.483 "bc96bdc1-2315-4990-91b4-bb2b50e85ccf" 00:05:57.483 ], 00:05:57.483 "product_name": "Malloc disk", 00:05:57.483 "block_size": 512, 00:05:57.483 "num_blocks": 16384, 00:05:57.483 "uuid": "bc96bdc1-2315-4990-91b4-bb2b50e85ccf", 00:05:57.483 "assigned_rate_limits": { 00:05:57.483 "rw_ios_per_sec": 0, 00:05:57.483 "rw_mbytes_per_sec": 0, 00:05:57.483 "r_mbytes_per_sec": 0, 00:05:57.483 "w_mbytes_per_sec": 0 00:05:57.483 }, 00:05:57.483 "claimed": false, 00:05:57.483 "zoned": false, 00:05:57.483 "supported_io_types": { 00:05:57.483 "read": true, 00:05:57.483 "write": true, 00:05:57.483 "unmap": true, 00:05:57.483 "write_zeroes": true, 00:05:57.483 "flush": true, 00:05:57.483 "reset": true, 00:05:57.483 "compare": false, 00:05:57.483 "compare_and_write": false, 00:05:57.483 "abort": true, 00:05:57.483 "nvme_admin": false, 00:05:57.483 "nvme_io": false 00:05:57.483 }, 00:05:57.483 "memory_domains": [ 00:05:57.483 { 00:05:57.483 "dma_device_id": "system", 00:05:57.483 "dma_device_type": 1 00:05:57.483 }, 00:05:57.483 { 00:05:57.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.483 "dma_device_type": 2 00:05:57.483 } 00:05:57.483 ], 00:05:57.483 "driver_specific": {} 00:05:57.483 } 00:05:57.483 ]' 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 [2024-07-21 03:15:42.608213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:57.483 [2024-07-21 03:15:42.608262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.483 [2024-07-21 03:15:42.608287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa73d60 00:05:57.483 [2024-07-21 03:15:42.608303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.483 [2024-07-21 03:15:42.609814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.483 [2024-07-21 03:15:42.609840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.483 Passthru0 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.483 { 00:05:57.483 "name": "Malloc0", 00:05:57.483 "aliases": [ 00:05:57.483 "bc96bdc1-2315-4990-91b4-bb2b50e85ccf" 00:05:57.483 ], 00:05:57.483 "product_name": "Malloc disk", 00:05:57.483 "block_size": 512, 00:05:57.483 "num_blocks": 16384, 00:05:57.483 "uuid": "bc96bdc1-2315-4990-91b4-bb2b50e85ccf", 00:05:57.483 "assigned_rate_limits": { 00:05:57.483 "rw_ios_per_sec": 0, 00:05:57.483 "rw_mbytes_per_sec": 0, 00:05:57.483 "r_mbytes_per_sec": 0, 00:05:57.483 "w_mbytes_per_sec": 0 00:05:57.483 }, 00:05:57.483 "claimed": true, 00:05:57.483 "claim_type": "exclusive_write", 00:05:57.483 "zoned": false, 00:05:57.483 "supported_io_types": { 00:05:57.483 "read": true, 00:05:57.483 "write": true, 00:05:57.483 "unmap": true, 00:05:57.483 "write_zeroes": true, 00:05:57.483 "flush": true, 00:05:57.483 "reset": true, 00:05:57.483 "compare": false, 00:05:57.483 "compare_and_write": false, 00:05:57.483 "abort": true, 00:05:57.483 "nvme_admin": false, 00:05:57.483 "nvme_io": false 00:05:57.483 }, 00:05:57.483 "memory_domains": [ 00:05:57.483 { 00:05:57.483 "dma_device_id": "system", 00:05:57.483 "dma_device_type": 1 00:05:57.483 }, 00:05:57.483 { 00:05:57.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.483 "dma_device_type": 2 00:05:57.483 } 00:05:57.483 ], 00:05:57.483 "driver_specific": {} 00:05:57.483 }, 00:05:57.483 { 00:05:57.483 "name": "Passthru0", 00:05:57.483 "aliases": [ 00:05:57.483 "7496f310-9af3-5461-b4e1-42b3a521a150" 00:05:57.483 ], 00:05:57.483 "product_name": "passthru", 00:05:57.483 "block_size": 512, 00:05:57.483 "num_blocks": 16384, 00:05:57.483 "uuid": "7496f310-9af3-5461-b4e1-42b3a521a150", 00:05:57.483 "assigned_rate_limits": { 00:05:57.483 "rw_ios_per_sec": 0, 00:05:57.483 "rw_mbytes_per_sec": 0, 00:05:57.483 "r_mbytes_per_sec": 0, 00:05:57.483 "w_mbytes_per_sec": 0 00:05:57.483 }, 00:05:57.483 "claimed": false, 00:05:57.483 "zoned": false, 00:05:57.483 "supported_io_types": { 00:05:57.483 "read": true, 00:05:57.483 "write": true, 00:05:57.483 "unmap": true, 00:05:57.483 "write_zeroes": true, 00:05:57.483 "flush": true, 00:05:57.483 "reset": true, 00:05:57.483 "compare": false, 00:05:57.483 "compare_and_write": false, 00:05:57.483 "abort": true, 00:05:57.483 "nvme_admin": false, 00:05:57.483 "nvme_io": false 00:05:57.483 }, 00:05:57.483 "memory_domains": [ 00:05:57.483 { 00:05:57.483 "dma_device_id": "system", 00:05:57.483 "dma_device_type": 1 00:05:57.483 }, 00:05:57.483 { 00:05:57.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.483 "dma_device_type": 2 00:05:57.483 } 00:05:57.483 ], 00:05:57.483 "driver_specific": { 00:05:57.483 "passthru": { 00:05:57.483 "name": "Passthru0", 00:05:57.483 "base_bdev_name": "Malloc0" 00:05:57.483 } 00:05:57.483 } 00:05:57.483 } 00:05:57.483 ]' 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 
03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.483 03:15:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.483 00:05:57.483 real 0m0.230s 00:05:57.484 user 0m0.154s 00:05:57.484 sys 0m0.016s 00:05:57.484 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.484 03:15:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.484 ************************************ 00:05:57.484 END TEST rpc_integrity 00:05:57.484 ************************************ 00:05:57.484 03:15:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:57.484 03:15:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.484 03:15:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.484 03:15:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.484 ************************************ 00:05:57.484 START TEST rpc_plugins 00:05:57.484 ************************************ 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:57.484 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.484 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:57.484 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.484 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:57.740 { 00:05:57.740 "name": "Malloc1", 00:05:57.740 "aliases": [ 00:05:57.740 "dddf0700-6698-4f5a-b16a-0bab553a11ed" 00:05:57.740 ], 00:05:57.740 "product_name": "Malloc disk", 00:05:57.740 "block_size": 4096, 00:05:57.740 "num_blocks": 256, 00:05:57.740 "uuid": "dddf0700-6698-4f5a-b16a-0bab553a11ed", 00:05:57.740 "assigned_rate_limits": { 00:05:57.740 "rw_ios_per_sec": 0, 00:05:57.740 "rw_mbytes_per_sec": 0, 00:05:57.740 "r_mbytes_per_sec": 0, 00:05:57.740 "w_mbytes_per_sec": 0 00:05:57.740 }, 00:05:57.740 "claimed": false, 00:05:57.740 "zoned": false, 00:05:57.740 "supported_io_types": { 00:05:57.740 "read": true, 00:05:57.740 "write": true, 00:05:57.740 "unmap": true, 00:05:57.740 "write_zeroes": true, 00:05:57.740 
"flush": true, 00:05:57.740 "reset": true, 00:05:57.740 "compare": false, 00:05:57.740 "compare_and_write": false, 00:05:57.740 "abort": true, 00:05:57.740 "nvme_admin": false, 00:05:57.740 "nvme_io": false 00:05:57.740 }, 00:05:57.740 "memory_domains": [ 00:05:57.740 { 00:05:57.740 "dma_device_id": "system", 00:05:57.740 "dma_device_type": 1 00:05:57.740 }, 00:05:57.740 { 00:05:57.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.740 "dma_device_type": 2 00:05:57.740 } 00:05:57.740 ], 00:05:57.740 "driver_specific": {} 00:05:57.740 } 00:05:57.740 ]' 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:57.740 03:15:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:57.740 00:05:57.740 real 0m0.117s 00:05:57.740 user 0m0.074s 00:05:57.740 sys 0m0.012s 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.740 03:15:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 ************************************ 00:05:57.740 END TEST rpc_plugins 00:05:57.740 ************************************ 00:05:57.740 03:15:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:57.740 03:15:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.740 03:15:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.740 03:15:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 ************************************ 00:05:57.740 START TEST rpc_trace_cmd_test 00:05:57.740 ************************************ 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:57.740 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2277691", 00:05:57.740 "tpoint_group_mask": "0x8", 00:05:57.740 "iscsi_conn": { 00:05:57.740 "mask": "0x2", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "scsi": { 00:05:57.740 "mask": "0x4", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "bdev": { 00:05:57.740 "mask": "0x8", 00:05:57.740 "tpoint_mask": 
"0xffffffffffffffff" 00:05:57.740 }, 00:05:57.740 "nvmf_rdma": { 00:05:57.740 "mask": "0x10", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "nvmf_tcp": { 00:05:57.740 "mask": "0x20", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "ftl": { 00:05:57.740 "mask": "0x40", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "blobfs": { 00:05:57.740 "mask": "0x80", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "dsa": { 00:05:57.740 "mask": "0x200", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "thread": { 00:05:57.740 "mask": "0x400", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "nvme_pcie": { 00:05:57.740 "mask": "0x800", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "iaa": { 00:05:57.740 "mask": "0x1000", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "nvme_tcp": { 00:05:57.740 "mask": "0x2000", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "bdev_nvme": { 00:05:57.740 "mask": "0x4000", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 }, 00:05:57.740 "sock": { 00:05:57.740 "mask": "0x8000", 00:05:57.740 "tpoint_mask": "0x0" 00:05:57.740 } 00:05:57.740 }' 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:57.740 03:15:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:57.740 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:57.740 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:57.997 00:05:57.997 real 0m0.197s 00:05:57.997 user 0m0.172s 00:05:57.997 sys 0m0.016s 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 ************************************ 00:05:57.997 END TEST rpc_trace_cmd_test 00:05:57.997 ************************************ 00:05:57.997 03:15:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:57.997 03:15:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:57.997 03:15:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:57.997 03:15:43 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.997 03:15:43 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.997 03:15:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 ************************************ 00:05:57.997 START TEST rpc_daemon_integrity 00:05:57.997 ************************************ 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.997 { 00:05:57.997 "name": "Malloc2", 00:05:57.997 "aliases": [ 00:05:57.997 "9b19eea4-96ad-4092-97a5-b617bd10c200" 00:05:57.997 ], 00:05:57.997 "product_name": "Malloc disk", 00:05:57.997 "block_size": 512, 00:05:57.997 "num_blocks": 16384, 00:05:57.997 "uuid": "9b19eea4-96ad-4092-97a5-b617bd10c200", 00:05:57.997 "assigned_rate_limits": { 00:05:57.997 "rw_ios_per_sec": 0, 00:05:57.997 "rw_mbytes_per_sec": 0, 00:05:57.997 "r_mbytes_per_sec": 0, 00:05:57.997 "w_mbytes_per_sec": 0 00:05:57.997 }, 00:05:57.997 "claimed": false, 00:05:57.997 "zoned": false, 00:05:57.997 "supported_io_types": { 00:05:57.997 "read": true, 00:05:57.997 "write": true, 00:05:57.997 "unmap": true, 00:05:57.997 "write_zeroes": true, 00:05:57.997 "flush": true, 00:05:57.997 "reset": true, 00:05:57.997 "compare": false, 00:05:57.997 "compare_and_write": false, 00:05:57.997 "abort": true, 00:05:57.997 "nvme_admin": false, 00:05:57.997 "nvme_io": false 00:05:57.997 }, 00:05:57.997 "memory_domains": [ 00:05:57.997 { 00:05:57.997 "dma_device_id": "system", 00:05:57.997 "dma_device_type": 1 00:05:57.997 }, 00:05:57.997 { 00:05:57.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.997 "dma_device_type": 2 00:05:57.997 } 00:05:57.997 ], 00:05:57.997 "driver_specific": {} 00:05:57.997 } 00:05:57.997 ]' 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 [2024-07-21 03:15:43.286859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.997 [2024-07-21 03:15:43.286922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.997 [2024-07-21 03:15:43.286948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc25420 00:05:57.997 [2024-07-21 03:15:43.286964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.997 [2024-07-21 03:15:43.288322] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.997 [2024-07-21 03:15:43.288351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.997 Passthru0 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.997 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.997 { 00:05:57.997 "name": "Malloc2", 00:05:57.997 "aliases": [ 00:05:57.997 "9b19eea4-96ad-4092-97a5-b617bd10c200" 00:05:57.997 ], 00:05:57.997 "product_name": "Malloc disk", 00:05:57.997 "block_size": 512, 00:05:57.997 "num_blocks": 16384, 00:05:57.997 "uuid": "9b19eea4-96ad-4092-97a5-b617bd10c200", 00:05:57.997 "assigned_rate_limits": { 00:05:57.997 "rw_ios_per_sec": 0, 00:05:57.997 "rw_mbytes_per_sec": 0, 00:05:57.997 "r_mbytes_per_sec": 0, 00:05:57.997 "w_mbytes_per_sec": 0 00:05:57.997 }, 00:05:57.997 "claimed": true, 00:05:57.997 "claim_type": "exclusive_write", 00:05:57.997 "zoned": false, 00:05:57.997 "supported_io_types": { 00:05:57.997 "read": true, 00:05:57.997 "write": true, 00:05:57.997 "unmap": true, 00:05:57.997 "write_zeroes": true, 00:05:57.997 "flush": true, 00:05:57.997 "reset": true, 00:05:57.997 "compare": false, 00:05:57.997 "compare_and_write": false, 00:05:57.997 "abort": true, 00:05:57.997 "nvme_admin": false, 00:05:57.997 "nvme_io": false 00:05:57.997 }, 00:05:57.997 "memory_domains": [ 00:05:57.997 { 00:05:57.997 "dma_device_id": "system", 00:05:57.997 "dma_device_type": 1 00:05:57.997 }, 00:05:57.997 { 00:05:57.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.997 "dma_device_type": 2 00:05:57.997 } 00:05:57.997 ], 00:05:57.997 "driver_specific": {} 00:05:57.997 }, 00:05:57.997 { 00:05:57.997 "name": "Passthru0", 00:05:57.997 "aliases": [ 00:05:57.997 "392c3c7a-54ec-5dc2-ae15-bb3f7ad4e58e" 00:05:57.997 ], 00:05:57.997 "product_name": "passthru", 00:05:57.997 "block_size": 512, 00:05:57.997 "num_blocks": 16384, 00:05:57.998 "uuid": "392c3c7a-54ec-5dc2-ae15-bb3f7ad4e58e", 00:05:57.998 "assigned_rate_limits": { 00:05:57.998 "rw_ios_per_sec": 0, 00:05:57.998 "rw_mbytes_per_sec": 0, 00:05:57.998 "r_mbytes_per_sec": 0, 00:05:57.998 "w_mbytes_per_sec": 0 00:05:57.998 }, 00:05:57.998 "claimed": false, 00:05:57.998 "zoned": false, 00:05:57.998 "supported_io_types": { 00:05:57.998 "read": true, 00:05:57.998 "write": true, 00:05:57.998 "unmap": true, 00:05:57.998 "write_zeroes": true, 00:05:57.998 "flush": true, 00:05:57.998 "reset": true, 00:05:57.998 "compare": false, 00:05:57.998 "compare_and_write": false, 00:05:57.998 "abort": true, 00:05:57.998 "nvme_admin": false, 00:05:57.998 "nvme_io": false 00:05:57.998 }, 00:05:57.998 "memory_domains": [ 00:05:57.998 { 00:05:57.998 "dma_device_id": "system", 00:05:57.998 "dma_device_type": 1 00:05:57.998 }, 00:05:57.998 { 00:05:57.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.998 "dma_device_type": 2 00:05:57.998 } 00:05:57.998 ], 00:05:57.998 "driver_specific": { 00:05:57.998 "passthru": { 00:05:57.998 "name": "Passthru0", 00:05:57.998 "base_bdev_name": "Malloc2" 00:05:57.998 } 00:05:57.998 } 00:05:57.998 } 00:05:57.998 ]' 00:05:57.998 03:15:43 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.253 00:05:58.253 real 0m0.230s 00:05:58.253 user 0m0.145s 00:05:58.253 sys 0m0.027s 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.253 03:15:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 ************************************ 00:05:58.253 END TEST rpc_daemon_integrity 00:05:58.253 ************************************ 00:05:58.253 03:15:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.253 03:15:43 rpc -- rpc/rpc.sh@84 -- # killprocess 2277691 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@946 -- # '[' -z 2277691 ']' 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@950 -- # kill -0 2277691 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@951 -- # uname 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2277691 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2277691' 00:05:58.253 killing process with pid 2277691 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@965 -- # kill 2277691 00:05:58.253 03:15:43 rpc -- common/autotest_common.sh@970 -- # wait 2277691 00:05:58.816 00:05:58.816 real 0m1.905s 00:05:58.816 user 0m2.393s 00:05:58.816 sys 0m0.593s 00:05:58.816 03:15:43 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.816 03:15:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 ************************************ 00:05:58.816 END TEST rpc 00:05:58.816 ************************************ 00:05:58.816 03:15:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.816 03:15:43 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.816 03:15:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.816 03:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 ************************************ 00:05:58.816 START TEST skip_rpc 00:05:58.816 ************************************ 00:05:58.816 03:15:43 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.816 * Looking for test storage... 00:05:58.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:58.816 03:15:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.816 03:15:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.816 03:15:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.816 03:15:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.816 03:15:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.816 03:15:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 ************************************ 00:05:58.816 START TEST skip_rpc 00:05:58.816 ************************************ 00:05:58.816 03:15:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:58.816 03:15:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2278121 00:05:58.816 03:15:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.816 03:15:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.816 03:15:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.816 [2024-07-21 03:15:44.045437] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
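[annotation] skip_rpc, starting here, boots spdk_tgt with --no-rpc-server and then (below) asserts that rpc_cmd fails. Stripped of the xtrace plumbing, the pattern reduces to roughly this sketch (repo-relative paths assumed; rpc.py stands in for the harness's rpc_cmd wrapper):

  # Start a target that deliberately refuses to open an RPC socket.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5
  # With no RPC server, any call must fail; failure is the pass condition.
  if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC unexpectedly succeeded"
  else
    echo "PASS: spdk_get_version refused as expected"
  fi
  kill "$tgt" && wait "$tgt" || true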
00:05:58.816 [2024-07-21 03:15:44.045500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278121 ] 00:05:58.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.816 [2024-07-21 03:15:44.107440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.072 [2024-07-21 03:15:44.200551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.332 03:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2278121 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2278121 ']' 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2278121 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2278121 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2278121' 00:06:04.332 killing process with pid 2278121 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2278121 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2278121 00:06:04.332 00:06:04.332 real 0m5.438s 00:06:04.332 user 0m5.118s 00:06:04.332 sys 0m0.322s 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.332 03:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.332 ************************************ 00:06:04.332 END TEST skip_rpc 
00:06:04.332 ************************************ 00:06:04.332 03:15:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:04.332 03:15:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.332 03:15:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.332 03:15:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.332 ************************************ 00:06:04.332 START TEST skip_rpc_with_json 00:06:04.332 ************************************ 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2278812 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2278812 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2278812 ']' 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.332 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.332 [2024-07-21 03:15:49.537695] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
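[annotation] skip_rpc_with_json drives a live target on the default /var/tmp/spdk.sock, snapshots its state with save_config, and later relaunches a second target (pid 2278948 below) read-only from that snapshot. The round trip reduces to two commands, sketched here with an arbitrary output path:

  # Serialize the running target's subsystem configuration to JSON.
  ./scripts/rpc.py save_config > /tmp/config.json
  # A fresh target can then apply the snapshot at startup, no RPCs needed.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json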
00:06:04.332 [2024-07-21 03:15:49.537773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278812 ] 00:06:04.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.332 [2024-07-21 03:15:49.597373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.590 [2024-07-21 03:15:49.686005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.848 [2024-07-21 03:15:49.938644] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:04.848 request: 00:06:04.848 { 00:06:04.848 "trtype": "tcp", 00:06:04.848 "method": "nvmf_get_transports", 00:06:04.848 "req_id": 1 00:06:04.848 } 00:06:04.848 Got JSON-RPC error response 00:06:04.848 response: 00:06:04.848 { 00:06:04.848 "code": -19, 00:06:04.848 "message": "No such device" 00:06:04.848 } 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.848 [2024-07-21 03:15:49.946768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.848 03:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.848 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.848 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.848 { 00:06:04.848 "subsystems": [ 00:06:04.848 { 00:06:04.848 "subsystem": "vfio_user_target", 00:06:04.848 "config": null 00:06:04.848 }, 00:06:04.848 { 00:06:04.848 "subsystem": "keyring", 00:06:04.848 "config": [] 00:06:04.848 }, 00:06:04.848 { 00:06:04.848 "subsystem": "iobuf", 00:06:04.848 "config": [ 00:06:04.848 { 00:06:04.848 "method": "iobuf_set_options", 00:06:04.849 "params": { 00:06:04.849 "small_pool_count": 8192, 00:06:04.849 "large_pool_count": 1024, 00:06:04.849 "small_bufsize": 8192, 00:06:04.849 "large_bufsize": 135168 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "sock", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "sock_set_default_impl", 00:06:04.849 "params": { 00:06:04.849 "impl_name": "posix" 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": 
"sock_impl_set_options", 00:06:04.849 "params": { 00:06:04.849 "impl_name": "ssl", 00:06:04.849 "recv_buf_size": 4096, 00:06:04.849 "send_buf_size": 4096, 00:06:04.849 "enable_recv_pipe": true, 00:06:04.849 "enable_quickack": false, 00:06:04.849 "enable_placement_id": 0, 00:06:04.849 "enable_zerocopy_send_server": true, 00:06:04.849 "enable_zerocopy_send_client": false, 00:06:04.849 "zerocopy_threshold": 0, 00:06:04.849 "tls_version": 0, 00:06:04.849 "enable_ktls": false 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "sock_impl_set_options", 00:06:04.849 "params": { 00:06:04.849 "impl_name": "posix", 00:06:04.849 "recv_buf_size": 2097152, 00:06:04.849 "send_buf_size": 2097152, 00:06:04.849 "enable_recv_pipe": true, 00:06:04.849 "enable_quickack": false, 00:06:04.849 "enable_placement_id": 0, 00:06:04.849 "enable_zerocopy_send_server": true, 00:06:04.849 "enable_zerocopy_send_client": false, 00:06:04.849 "zerocopy_threshold": 0, 00:06:04.849 "tls_version": 0, 00:06:04.849 "enable_ktls": false 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "vmd", 00:06:04.849 "config": [] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "accel", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "accel_set_options", 00:06:04.849 "params": { 00:06:04.849 "small_cache_size": 128, 00:06:04.849 "large_cache_size": 16, 00:06:04.849 "task_count": 2048, 00:06:04.849 "sequence_count": 2048, 00:06:04.849 "buf_count": 2048 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "bdev", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "bdev_set_options", 00:06:04.849 "params": { 00:06:04.849 "bdev_io_pool_size": 65535, 00:06:04.849 "bdev_io_cache_size": 256, 00:06:04.849 "bdev_auto_examine": true, 00:06:04.849 "iobuf_small_cache_size": 128, 00:06:04.849 "iobuf_large_cache_size": 16 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "bdev_raid_set_options", 00:06:04.849 "params": { 00:06:04.849 "process_window_size_kb": 1024 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "bdev_iscsi_set_options", 00:06:04.849 "params": { 00:06:04.849 "timeout_sec": 30 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "bdev_nvme_set_options", 00:06:04.849 "params": { 00:06:04.849 "action_on_timeout": "none", 00:06:04.849 "timeout_us": 0, 00:06:04.849 "timeout_admin_us": 0, 00:06:04.849 "keep_alive_timeout_ms": 10000, 00:06:04.849 "arbitration_burst": 0, 00:06:04.849 "low_priority_weight": 0, 00:06:04.849 "medium_priority_weight": 0, 00:06:04.849 "high_priority_weight": 0, 00:06:04.849 "nvme_adminq_poll_period_us": 10000, 00:06:04.849 "nvme_ioq_poll_period_us": 0, 00:06:04.849 "io_queue_requests": 0, 00:06:04.849 "delay_cmd_submit": true, 00:06:04.849 "transport_retry_count": 4, 00:06:04.849 "bdev_retry_count": 3, 00:06:04.849 "transport_ack_timeout": 0, 00:06:04.849 "ctrlr_loss_timeout_sec": 0, 00:06:04.849 "reconnect_delay_sec": 0, 00:06:04.849 "fast_io_fail_timeout_sec": 0, 00:06:04.849 "disable_auto_failback": false, 00:06:04.849 "generate_uuids": false, 00:06:04.849 "transport_tos": 0, 00:06:04.849 "nvme_error_stat": false, 00:06:04.849 "rdma_srq_size": 0, 00:06:04.849 "io_path_stat": false, 00:06:04.849 "allow_accel_sequence": false, 00:06:04.849 "rdma_max_cq_size": 0, 00:06:04.849 "rdma_cm_event_timeout_ms": 0, 00:06:04.849 "dhchap_digests": [ 00:06:04.849 "sha256", 00:06:04.849 "sha384", 00:06:04.849 "sha512" 
00:06:04.849 ], 00:06:04.849 "dhchap_dhgroups": [ 00:06:04.849 "null", 00:06:04.849 "ffdhe2048", 00:06:04.849 "ffdhe3072", 00:06:04.849 "ffdhe4096", 00:06:04.849 "ffdhe6144", 00:06:04.849 "ffdhe8192" 00:06:04.849 ] 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "bdev_nvme_set_hotplug", 00:06:04.849 "params": { 00:06:04.849 "period_us": 100000, 00:06:04.849 "enable": false 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "bdev_wait_for_examine" 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "scsi", 00:06:04.849 "config": null 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "scheduler", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "framework_set_scheduler", 00:06:04.849 "params": { 00:06:04.849 "name": "static" 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "vhost_scsi", 00:06:04.849 "config": [] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "vhost_blk", 00:06:04.849 "config": [] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "ublk", 00:06:04.849 "config": [] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "nbd", 00:06:04.849 "config": [] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "nvmf", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "nvmf_set_config", 00:06:04.849 "params": { 00:06:04.849 "discovery_filter": "match_any", 00:06:04.849 "admin_cmd_passthru": { 00:06:04.849 "identify_ctrlr": false 00:06:04.849 } 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "nvmf_set_max_subsystems", 00:06:04.849 "params": { 00:06:04.849 "max_subsystems": 1024 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "nvmf_set_crdt", 00:06:04.849 "params": { 00:06:04.849 "crdt1": 0, 00:06:04.849 "crdt2": 0, 00:06:04.849 "crdt3": 0 00:06:04.849 } 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "method": "nvmf_create_transport", 00:06:04.849 "params": { 00:06:04.849 "trtype": "TCP", 00:06:04.849 "max_queue_depth": 128, 00:06:04.849 "max_io_qpairs_per_ctrlr": 127, 00:06:04.849 "in_capsule_data_size": 4096, 00:06:04.849 "max_io_size": 131072, 00:06:04.849 "io_unit_size": 131072, 00:06:04.849 "max_aq_depth": 128, 00:06:04.849 "num_shared_buffers": 511, 00:06:04.849 "buf_cache_size": 4294967295, 00:06:04.849 "dif_insert_or_strip": false, 00:06:04.849 "zcopy": false, 00:06:04.849 "c2h_success": true, 00:06:04.849 "sock_priority": 0, 00:06:04.849 "abort_timeout_sec": 1, 00:06:04.849 "ack_timeout": 0, 00:06:04.849 "data_wr_pool_size": 0 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 }, 00:06:04.849 { 00:06:04.849 "subsystem": "iscsi", 00:06:04.849 "config": [ 00:06:04.849 { 00:06:04.849 "method": "iscsi_set_options", 00:06:04.849 "params": { 00:06:04.849 "node_base": "iqn.2016-06.io.spdk", 00:06:04.849 "max_sessions": 128, 00:06:04.849 "max_connections_per_session": 2, 00:06:04.849 "max_queue_depth": 64, 00:06:04.849 "default_time2wait": 2, 00:06:04.849 "default_time2retain": 20, 00:06:04.849 "first_burst_length": 8192, 00:06:04.849 "immediate_data": true, 00:06:04.849 "allow_duplicated_isid": false, 00:06:04.849 "error_recovery_level": 0, 00:06:04.849 "nop_timeout": 60, 00:06:04.849 "nop_in_interval": 30, 00:06:04.849 "disable_chap": false, 00:06:04.849 "require_chap": false, 00:06:04.849 "mutual_chap": false, 00:06:04.849 "chap_group": 0, 00:06:04.849 "max_large_datain_per_connection": 64, 00:06:04.849 "max_r2t_per_connection": 4, 00:06:04.849 
"pdu_pool_size": 36864, 00:06:04.849 "immediate_data_pool_size": 16384, 00:06:04.849 "data_out_pool_size": 2048 00:06:04.849 } 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 } 00:06:04.849 ] 00:06:04.849 } 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2278812 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2278812 ']' 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2278812 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2278812 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2278812' 00:06:04.849 killing process with pid 2278812 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2278812 00:06:04.849 03:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2278812 00:06:05.415 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2278948 00:06:05.415 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.415 03:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2278948 ']' 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2278948' 00:06:10.684 killing process with pid 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2278948 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.684 00:06:10.684 real 
0m6.490s 00:06:10.684 user 0m6.090s 00:06:10.684 sys 0m0.687s 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.684 03:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.684 ************************************ 00:06:10.684 END TEST skip_rpc_with_json 00:06:10.684 ************************************ 00:06:10.942 03:15:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.942 03:15:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.942 03:15:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.942 03:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 ************************************ 00:06:10.942 START TEST skip_rpc_with_delay 00:06:10.942 ************************************ 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.942 [2024-07-21 03:15:56.075627] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
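[annotation] The error above is the expected outcome: --wait-for-rpc holds subsystem initialization until an RPC arrives, so it cannot be combined with --no-rpc-server. In normal use the flag pairs with the framework_start_init RPC, roughly as follows (a sketch, not this test's exact commands):

  ./build/bin/spdk_tgt --wait-for-rpc &
  # Pre-init RPCs (e.g. sock_impl_set_options, visible in the saved config
  # above) would go here; this call then releases the target to finish init.
  ./scripts/rpc.py framework_start_init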
00:06:10.942 [2024-07-21 03:15:56.075724] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.942 00:06:10.942 real 0m0.065s 00:06:10.942 user 0m0.045s 00:06:10.942 sys 0m0.020s 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.942 03:15:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 ************************************ 00:06:10.942 END TEST skip_rpc_with_delay 00:06:10.942 ************************************ 00:06:10.942 03:15:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.942 03:15:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.942 03:15:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.942 03:15:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.942 03:15:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.942 03:15:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 ************************************ 00:06:10.942 START TEST exit_on_failed_rpc_init 00:06:10.942 ************************************ 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2279671 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2279671 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2279671 ']' 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.942 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 [2024-07-21 03:15:56.191170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
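[annotation] exit_on_failed_rpc_init, starting here, brings up one target on the default RPC socket and then launches a second one (core mask 0x2) against the same path, expecting a clean failure. Outside the harness the collision reproduces roughly as below; both processes default to /var/tmp/spdk.sock unless -r overrides it:

  ./build/bin/spdk_tgt -m 0x1 &    # first target owns /var/tmp/spdk.sock
  sleep 2                          # crude stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x2      # fails: RPC socket path already in use
  echo "second target exited with $?"
  # A second instance needs its own socket instead:
  #   ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock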
00:06:10.942 [2024-07-21 03:15:56.191251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279671 ] 00:06:10.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.199 [2024-07-21 03:15:56.262701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.199 [2024-07-21 03:15:56.362665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:11.456 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.456 [2024-07-21 03:15:56.674660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:11.456 [2024-07-21 03:15:56.674750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279677 ] 00:06:11.456 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.456 [2024-07-21 03:15:56.733783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.713 [2024-07-21 03:15:56.823055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.713 [2024-07-21 03:15:56.823180] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:11.713 [2024-07-21 03:15:56.823214] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.713 [2024-07-21 03:15:56.823226] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2279671 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2279671 ']' 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2279671 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2279671 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2279671' 00:06:11.713 killing process with pid 2279671 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2279671 00:06:11.713 03:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2279671 00:06:12.277 00:06:12.277 real 0m1.178s 00:06:12.277 user 0m1.375s 00:06:12.277 sys 0m0.469s 00:06:12.277 03:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.277 03:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.277 ************************************ 00:06:12.277 END TEST exit_on_failed_rpc_init 00:06:12.277 ************************************ 00:06:12.277 03:15:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.277 00:06:12.277 real 0m13.421s 00:06:12.277 user 0m12.734s 00:06:12.277 sys 0m1.656s 00:06:12.277 03:15:57 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.277 03:15:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.277 ************************************ 00:06:12.277 END TEST skip_rpc 00:06:12.277 ************************************ 00:06:12.277 03:15:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:12.277 03:15:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.277 03:15:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.277 03:15:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.277 ************************************ 00:06:12.277 START TEST rpc_client 00:06:12.277 ************************************ 00:06:12.277 03:15:57 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:12.277 * Looking for test storage... 00:06:12.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:12.277 03:15:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:12.277 OK 00:06:12.277 03:15:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:12.277 00:06:12.277 real 0m0.069s 00:06:12.277 user 0m0.026s 00:06:12.277 sys 0m0.048s 00:06:12.277 03:15:57 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.277 03:15:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:12.277 ************************************ 00:06:12.277 END TEST rpc_client 00:06:12.277 ************************************ 00:06:12.277 03:15:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:12.277 03:15:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.277 03:15:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.277 03:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.277 ************************************ 00:06:12.277 START TEST json_config 00:06:12.277 ************************************ 00:06:12.277 03:15:57 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.277 03:15:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.277 03:15:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.277 03:15:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.277 03:15:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.277 03:15:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.277 03:15:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.277 03:15:57 json_config -- paths/export.sh@5 -- # export PATH 00:06:12.277 03:15:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@47 -- # : 0 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:12.277 03:15:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:12.277 03:15:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:12.278 INFO: JSON configuration test init 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.278 03:15:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:12.278 03:15:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:12.278 03:15:57 json_config -- json_config/common.sh@10 -- # shift 00:06:12.278 03:15:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.278 03:15:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.278 03:15:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.278 03:15:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.278 03:15:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.278 03:15:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2279919 00:06:12.278 03:15:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:12.278 03:15:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.278 Waiting for target to run... 
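[annotation] waitforlisten, invoked next, simply blocks until the target answers on its RPC socket (/var/tmp/spdk_tgt.sock for this test). A minimal polling equivalent, using rpc_get_methods as a cheap liveness probe (the retry count and interval here are arbitrary):

  sock=/var/tmp/spdk_tgt.sock
  for i in $(seq 1 100); do
    ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done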
00:06:12.278 03:15:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2279919 /var/tmp/spdk_tgt.sock 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@827 -- # '[' -z 2279919 ']' 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.278 03:15:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.536 [2024-07-21 03:15:57.611255] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:12.536 [2024-07-21 03:15:57.611338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279919 ] 00:06:12.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.794 [2024-07-21 03:15:57.958838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.794 [2024-07-21 03:15:58.025213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:13.358 03:15:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:13.358 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.358 03:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:13.358 03:15:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:13.358 03:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.638 03:16:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:16.638 03:16:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.638 03:16:01 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:16.638 03:16:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.638 03:16:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:16.896 03:16:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.896 03:16:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:16.896 03:16:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:16.896 03:16:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:16.896 03:16:01 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.896 03:16:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.154 MallocForNvmf0 00:06:17.154 03:16:02 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.154 03:16:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.154 MallocForNvmf1 00:06:17.412 03:16:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.412 03:16:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.412 [2024-07-21 03:16:02.716696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.670 03:16:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.670 03:16:02 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.670 03:16:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.670 03:16:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.927 03:16:03 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:17.927 03:16:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.184 03:16:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.184 03:16:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.441 [2024-07-21 03:16:03.687806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.441 03:16:03 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:18.441 03:16:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.441 03:16:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 03:16:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:18.441 03:16:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.441 03:16:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 03:16:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:18.441 03:16:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.441 03:16:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.699 MallocBdevForConfigChangeCheck 00:06:18.699 03:16:03 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:18.699 03:16:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.699 03:16:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.699 03:16:04 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:18.699 03:16:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.264 03:16:04 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:19.264 INFO: shutting down applications... 
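Aside: the RPC sequence traced above is the entire NVMe-oF/TCP target definition for this test; replayed by hand against the same socket it comes down to the calls below (the $rpc shorthand is illustrative, not part of the suite):

    rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

    # Two malloc bdevs to serve as namespaces (size in MB, block size in bytes).
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, a subsystem carrying both namespaces, and a listener on 4420.
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420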
00:06:19.264 03:16:04 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:19.264 03:16:04 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:19.264 03:16:04 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:19.264 03:16:04 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.160 Calling clear_iscsi_subsystem 00:06:21.160 Calling clear_nvmf_subsystem 00:06:21.160 Calling clear_nbd_subsystem 00:06:21.160 Calling clear_ublk_subsystem 00:06:21.160 Calling clear_vhost_blk_subsystem 00:06:21.160 Calling clear_vhost_scsi_subsystem 00:06:21.160 Calling clear_bdev_subsystem 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@345 -- # break 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:21.160 03:16:06 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:21.160 03:16:06 json_config -- json_config/common.sh@31 -- # local app=target 00:06:21.160 03:16:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.160 03:16:06 json_config -- json_config/common.sh@35 -- # [[ -n 2279919 ]] 00:06:21.160 03:16:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2279919 00:06:21.160 03:16:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.160 03:16:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.160 03:16:06 json_config -- json_config/common.sh@41 -- # kill -0 2279919 00:06:21.160 03:16:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.728 03:16:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.728 03:16:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.728 03:16:06 json_config -- json_config/common.sh@41 -- # kill -0 2279919 00:06:21.728 03:16:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.728 03:16:06 json_config -- json_config/common.sh@43 -- # break 00:06:21.728 03:16:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.728 03:16:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.728 SPDK target shutdown done 00:06:21.728 03:16:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:21.728 INFO: relaunching applications... 
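Aside: the shutdown just traced is a SIGINT followed by a bounded liveness poll; condensed, and assuming the same 30 x 0.5 s budget common.sh uses above:

    kill -SIGINT "${app_pid[target]}"

    # kill -0 only probes for existence; allow up to ~15 s for a clean exit.
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[target]}" 2>/dev/null || { app_pid[target]=''; break; }
        sleep 0.5
    done
    echo 'SPDK target shutdown done'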
00:06:21.728 03:16:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.728 03:16:06 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.728 03:16:06 json_config -- json_config/common.sh@10 -- # shift 00:06:21.728 03:16:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.728 03:16:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.728 03:16:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.728 03:16:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.728 03:16:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.728 03:16:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2281110 00:06:21.728 03:16:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.728 03:16:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.728 Waiting for target to run... 00:06:21.728 03:16:06 json_config -- json_config/common.sh@25 -- # waitforlisten 2281110 /var/tmp/spdk_tgt.sock 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@827 -- # '[' -z 2281110 ']' 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.728 03:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.728 [2024-07-21 03:16:06.948809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:21.728 [2024-07-21 03:16:06.948905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281110 ] 00:06:21.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.294 [2024-07-21 03:16:07.458928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.294 [2024-07-21 03:16:07.540929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.578 [2024-07-21 03:16:10.575748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.578 [2024-07-21 03:16:10.608177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.142 03:16:11 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.142 03:16:11 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:26.142 03:16:11 json_config -- json_config/common.sh@26 -- # echo '' 00:06:26.142 00:06:26.142 03:16:11 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:26.142 03:16:11 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:26.142 INFO: Checking if target configuration is the same... 
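Aside: the equality check that follows pipes both a live save_config dump and the on-disk JSON through config_filter.py -method sort, so key order alone can never produce a diff. A sketch under the assumption that the filter reads stdin and writes stdout, as json_diff.sh uses it below:

    rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    sort_cfg='./test/json_config/config_filter.py -method sort'

    # Normalize both sides, then let diff decide.
    $rpc save_config | $sort_cfg > /tmp/live.json
    $sort_cfg < spdk_tgt_config.json > /tmp/disk.json

    if diff -u /tmp/live.json /tmp/disk.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi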
00:06:26.142 03:16:11 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.142 03:16:11 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:26.142 03:16:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.142 + '[' 2 -ne 2 ']' 00:06:26.142 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.142 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:26.142 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.142 +++ basename /dev/fd/62 00:06:26.142 ++ mktemp /tmp/62.XXX 00:06:26.142 + tmp_file_1=/tmp/62.nPa 00:06:26.142 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.142 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.142 + tmp_file_2=/tmp/spdk_tgt_config.json.Dgc 00:06:26.142 + ret=0 00:06:26.142 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.710 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.710 + diff -u /tmp/62.nPa /tmp/spdk_tgt_config.json.Dgc 00:06:26.710 + echo 'INFO: JSON config files are the same' 00:06:26.710 INFO: JSON config files are the same 00:06:26.710 + rm /tmp/62.nPa /tmp/spdk_tgt_config.json.Dgc 00:06:26.710 + exit 0 00:06:26.710 03:16:11 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:26.710 03:16:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:26.710 INFO: changing configuration and checking if this can be detected... 00:06:26.710 03:16:11 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.710 03:16:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.967 03:16:12 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.967 03:16:12 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:26.967 03:16:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.967 + '[' 2 -ne 2 ']' 00:06:26.967 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.967 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:26.967 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.967 +++ basename /dev/fd/62 00:06:26.967 ++ mktemp /tmp/62.XXX 00:06:26.967 + tmp_file_1=/tmp/62.oyD 00:06:26.967 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.967 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.967 + tmp_file_2=/tmp/spdk_tgt_config.json.KnW 00:06:26.967 + ret=0 00:06:26.967 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.225 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.225 + diff -u /tmp/62.oyD /tmp/spdk_tgt_config.json.KnW 00:06:27.225 + ret=1 00:06:27.225 + echo '=== Start of file: /tmp/62.oyD ===' 00:06:27.225 + cat /tmp/62.oyD 00:06:27.225 + echo '=== End of file: /tmp/62.oyD ===' 00:06:27.225 + echo '' 00:06:27.225 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KnW ===' 00:06:27.225 + cat /tmp/spdk_tgt_config.json.KnW 00:06:27.225 + echo '=== End of file: /tmp/spdk_tgt_config.json.KnW ===' 00:06:27.225 + echo '' 00:06:27.225 + rm /tmp/62.oyD /tmp/spdk_tgt_config.json.KnW 00:06:27.225 + exit 1 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:27.225 INFO: configuration change detected. 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@317 -- # [[ -n 2281110 ]] 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.225 03:16:12 json_config -- json_config/json_config.sh@323 -- # killprocess 2281110 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@946 -- # '[' -z 2281110 ']' 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@950 -- # kill -0 2281110 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@951 -- # uname 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.225 03:16:12 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2281110 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2281110' 00:06:27.225 killing process with pid 2281110 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@965 -- # kill 2281110 00:06:27.225 03:16:12 json_config -- common/autotest_common.sh@970 -- # wait 2281110 00:06:29.152 03:16:14 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.152 03:16:14 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:29.152 03:16:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.152 03:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.152 03:16:14 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:29.152 03:16:14 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:29.152 INFO: Success 00:06:29.152 00:06:29.152 real 0m16.672s 00:06:29.152 user 0m18.568s 00:06:29.152 sys 0m2.020s 00:06:29.152 03:16:14 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.152 03:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.152 ************************************ 00:06:29.152 END TEST json_config 00:06:29.152 ************************************ 00:06:29.152 03:16:14 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.152 03:16:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.152 03:16:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.152 03:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.152 ************************************ 00:06:29.152 START TEST json_config_extra_key 00:06:29.152 ************************************ 00:06:29.152 03:16:14 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.152 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.152 03:16:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.153 03:16:14 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.153 03:16:14 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.153 03:16:14 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.153 03:16:14 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.153 03:16:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.153 03:16:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.153 03:16:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.153 03:16:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:29.153 03:16:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.153 03:16:14 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.153 03:16:14 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:29.153 INFO: launching applications... 00:06:29.153 03:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2282154 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.153 Waiting for target to run... 
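Aside: nvmf/common.sh above derives the initiator identity from the local nvme-cli; one way to peel the uuid back out of the generated NQN (the parameter expansion is an assumption, the values match the trace):

    # gen-hostnqn emits "nqn.2014-08.org.nvmexpress:uuid:<host-uuid>".
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the uuid after the last ':'

    # Packed into an argv fragment for later "nvme connect" invocations.
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")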
00:06:29.153 03:16:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2282154 /var/tmp/spdk_tgt.sock 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2282154 ']' 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.153 03:16:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.153 [2024-07-21 03:16:14.328845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:29.153 [2024-07-21 03:16:14.328941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282154 ] 00:06:29.153 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.725 [2024-07-21 03:16:14.821757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.725 [2024-07-21 03:16:14.898130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.288 03:16:15 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.288 03:16:15 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:30.288 00:06:30.288 03:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:30.288 INFO: shutting down applications... 
00:06:30.288 03:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2282154 ]] 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2282154 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2282154 00:06:30.288 03:16:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2282154 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.544 03:16:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.544 SPDK target shutdown done 00:06:30.544 03:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.544 Success 00:06:30.544 00:06:30.544 real 0m1.594s 00:06:30.544 user 0m1.480s 00:06:30.544 sys 0m0.573s 00:06:30.544 03:16:15 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.544 03:16:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.544 ************************************ 00:06:30.544 END TEST json_config_extra_key 00:06:30.544 ************************************ 00:06:30.544 03:16:15 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.544 03:16:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.544 03:16:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.544 03:16:15 -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 ************************************ 00:06:30.800 START TEST alias_rpc 00:06:30.800 ************************************ 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.800 * Looking for test storage... 
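Aside: every suite in this run tears its daemon down through the same killprocess helper, whose probes (kill -0, ps comm=, kill, wait) are traced again just below; a condensed sketch, with the sudo branch assumed rather than traced:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" || return 1   # known and still alive?

        # comm is "reactor_0" for spdk_tgt; a "sudo" comm would mean the real
        # process is a child and needs signalling instead (branch assumed).
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"
    }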
00:06:30.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:30.800 03:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.800 03:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2282350 00:06:30.800 03:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.800 03:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2282350 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2282350 ']' 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.800 03:16:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 [2024-07-21 03:16:15.969168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:30.800 [2024-07-21 03:16:15.969268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282350 ] 00:06:30.800 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.800 [2024-07-21 03:16:16.034382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.057 [2024-07-21 03:16:16.125035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.314 03:16:16 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.315 03:16:16 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:31.315 03:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:31.572 03:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2282350 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2282350 ']' 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2282350 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2282350 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2282350' 00:06:31.572 killing process with pid 2282350 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@965 -- # kill 2282350 00:06:31.572 03:16:16 alias_rpc -- common/autotest_common.sh@970 -- # wait 2282350 00:06:31.830 00:06:31.830 real 0m1.221s 00:06:31.830 user 0m1.274s 00:06:31.830 sys 0m0.436s 00:06:31.830 03:16:17 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.830 03:16:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 
************************************ 00:06:31.830 END TEST alias_rpc 00:06:31.830 ************************************ 00:06:31.830 03:16:17 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:31.830 03:16:17 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:31.830 03:16:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.830 03:16:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.830 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 ************************************ 00:06:31.830 START TEST spdkcli_tcp 00:06:31.830 ************************************ 00:06:31.830 03:16:17 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.088 * Looking for test storage... 00:06:32.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2282653 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:32.088 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2282653 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2282653 ']' 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.088 03:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.088 [2024-07-21 03:16:17.225730] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
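Aside: spdkcli_tcp below drives the same JSON-RPC client over TCP by bridging a local port to the target's UNIX socket with socat; port, retry count and timeout here are copied from the trace:

    # Expose the UNIX-domain RPC socket on 127.0.0.1:9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Same client, TCP transport: 100 connection retries, 2 s timeout.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"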
00:06:32.088 [2024-07-21 03:16:17.225813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282653 ] 00:06:32.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.088 [2024-07-21 03:16:17.287873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.088 [2024-07-21 03:16:17.383636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.088 [2024-07-21 03:16:17.383648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.344 03:16:17 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.344 03:16:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:32.344 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2282663 00:06:32.344 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.344 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.601 [ 00:06:32.601 "bdev_malloc_delete", 00:06:32.601 "bdev_malloc_create", 00:06:32.601 "bdev_null_resize", 00:06:32.601 "bdev_null_delete", 00:06:32.601 "bdev_null_create", 00:06:32.601 "bdev_nvme_cuse_unregister", 00:06:32.601 "bdev_nvme_cuse_register", 00:06:32.601 "bdev_opal_new_user", 00:06:32.601 "bdev_opal_set_lock_state", 00:06:32.601 "bdev_opal_delete", 00:06:32.601 "bdev_opal_get_info", 00:06:32.601 "bdev_opal_create", 00:06:32.601 "bdev_nvme_opal_revert", 00:06:32.601 "bdev_nvme_opal_init", 00:06:32.601 "bdev_nvme_send_cmd", 00:06:32.601 "bdev_nvme_get_path_iostat", 00:06:32.601 "bdev_nvme_get_mdns_discovery_info", 00:06:32.601 "bdev_nvme_stop_mdns_discovery", 00:06:32.601 "bdev_nvme_start_mdns_discovery", 00:06:32.601 "bdev_nvme_set_multipath_policy", 00:06:32.601 "bdev_nvme_set_preferred_path", 00:06:32.601 "bdev_nvme_get_io_paths", 00:06:32.601 "bdev_nvme_remove_error_injection", 00:06:32.601 "bdev_nvme_add_error_injection", 00:06:32.601 "bdev_nvme_get_discovery_info", 00:06:32.601 "bdev_nvme_stop_discovery", 00:06:32.601 "bdev_nvme_start_discovery", 00:06:32.601 "bdev_nvme_get_controller_health_info", 00:06:32.601 "bdev_nvme_disable_controller", 00:06:32.601 "bdev_nvme_enable_controller", 00:06:32.601 "bdev_nvme_reset_controller", 00:06:32.601 "bdev_nvme_get_transport_statistics", 00:06:32.601 "bdev_nvme_apply_firmware", 00:06:32.601 "bdev_nvme_detach_controller", 00:06:32.601 "bdev_nvme_get_controllers", 00:06:32.601 "bdev_nvme_attach_controller", 00:06:32.601 "bdev_nvme_set_hotplug", 00:06:32.601 "bdev_nvme_set_options", 00:06:32.601 "bdev_passthru_delete", 00:06:32.601 "bdev_passthru_create", 00:06:32.601 "bdev_lvol_set_parent_bdev", 00:06:32.601 "bdev_lvol_set_parent", 00:06:32.601 "bdev_lvol_check_shallow_copy", 00:06:32.601 "bdev_lvol_start_shallow_copy", 00:06:32.601 "bdev_lvol_grow_lvstore", 00:06:32.601 "bdev_lvol_get_lvols", 00:06:32.601 "bdev_lvol_get_lvstores", 00:06:32.601 "bdev_lvol_delete", 00:06:32.601 "bdev_lvol_set_read_only", 00:06:32.601 "bdev_lvol_resize", 00:06:32.601 "bdev_lvol_decouple_parent", 00:06:32.601 "bdev_lvol_inflate", 00:06:32.601 "bdev_lvol_rename", 00:06:32.601 "bdev_lvol_clone_bdev", 00:06:32.601 "bdev_lvol_clone", 00:06:32.601 "bdev_lvol_snapshot", 00:06:32.601 "bdev_lvol_create", 00:06:32.601 "bdev_lvol_delete_lvstore", 00:06:32.601 "bdev_lvol_rename_lvstore", 
00:06:32.601 "bdev_lvol_create_lvstore", 00:06:32.601 "bdev_raid_set_options", 00:06:32.601 "bdev_raid_remove_base_bdev", 00:06:32.601 "bdev_raid_add_base_bdev", 00:06:32.601 "bdev_raid_delete", 00:06:32.601 "bdev_raid_create", 00:06:32.601 "bdev_raid_get_bdevs", 00:06:32.601 "bdev_error_inject_error", 00:06:32.601 "bdev_error_delete", 00:06:32.601 "bdev_error_create", 00:06:32.601 "bdev_split_delete", 00:06:32.601 "bdev_split_create", 00:06:32.601 "bdev_delay_delete", 00:06:32.601 "bdev_delay_create", 00:06:32.601 "bdev_delay_update_latency", 00:06:32.601 "bdev_zone_block_delete", 00:06:32.601 "bdev_zone_block_create", 00:06:32.601 "blobfs_create", 00:06:32.602 "blobfs_detect", 00:06:32.602 "blobfs_set_cache_size", 00:06:32.602 "bdev_aio_delete", 00:06:32.602 "bdev_aio_rescan", 00:06:32.602 "bdev_aio_create", 00:06:32.602 "bdev_ftl_set_property", 00:06:32.602 "bdev_ftl_get_properties", 00:06:32.602 "bdev_ftl_get_stats", 00:06:32.602 "bdev_ftl_unmap", 00:06:32.602 "bdev_ftl_unload", 00:06:32.602 "bdev_ftl_delete", 00:06:32.602 "bdev_ftl_load", 00:06:32.602 "bdev_ftl_create", 00:06:32.602 "bdev_virtio_attach_controller", 00:06:32.602 "bdev_virtio_scsi_get_devices", 00:06:32.602 "bdev_virtio_detach_controller", 00:06:32.602 "bdev_virtio_blk_set_hotplug", 00:06:32.602 "bdev_iscsi_delete", 00:06:32.602 "bdev_iscsi_create", 00:06:32.602 "bdev_iscsi_set_options", 00:06:32.602 "accel_error_inject_error", 00:06:32.602 "ioat_scan_accel_module", 00:06:32.602 "dsa_scan_accel_module", 00:06:32.602 "iaa_scan_accel_module", 00:06:32.602 "vfu_virtio_create_scsi_endpoint", 00:06:32.602 "vfu_virtio_scsi_remove_target", 00:06:32.602 "vfu_virtio_scsi_add_target", 00:06:32.602 "vfu_virtio_create_blk_endpoint", 00:06:32.602 "vfu_virtio_delete_endpoint", 00:06:32.602 "keyring_file_remove_key", 00:06:32.602 "keyring_file_add_key", 00:06:32.602 "keyring_linux_set_options", 00:06:32.602 "iscsi_get_histogram", 00:06:32.602 "iscsi_enable_histogram", 00:06:32.602 "iscsi_set_options", 00:06:32.602 "iscsi_get_auth_groups", 00:06:32.602 "iscsi_auth_group_remove_secret", 00:06:32.602 "iscsi_auth_group_add_secret", 00:06:32.602 "iscsi_delete_auth_group", 00:06:32.602 "iscsi_create_auth_group", 00:06:32.602 "iscsi_set_discovery_auth", 00:06:32.602 "iscsi_get_options", 00:06:32.602 "iscsi_target_node_request_logout", 00:06:32.602 "iscsi_target_node_set_redirect", 00:06:32.602 "iscsi_target_node_set_auth", 00:06:32.602 "iscsi_target_node_add_lun", 00:06:32.602 "iscsi_get_stats", 00:06:32.602 "iscsi_get_connections", 00:06:32.602 "iscsi_portal_group_set_auth", 00:06:32.602 "iscsi_start_portal_group", 00:06:32.602 "iscsi_delete_portal_group", 00:06:32.602 "iscsi_create_portal_group", 00:06:32.602 "iscsi_get_portal_groups", 00:06:32.602 "iscsi_delete_target_node", 00:06:32.602 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.602 "iscsi_target_node_add_pg_ig_maps", 00:06:32.602 "iscsi_create_target_node", 00:06:32.602 "iscsi_get_target_nodes", 00:06:32.602 "iscsi_delete_initiator_group", 00:06:32.602 "iscsi_initiator_group_remove_initiators", 00:06:32.602 "iscsi_initiator_group_add_initiators", 00:06:32.602 "iscsi_create_initiator_group", 00:06:32.602 "iscsi_get_initiator_groups", 00:06:32.602 "nvmf_set_crdt", 00:06:32.602 "nvmf_set_config", 00:06:32.602 "nvmf_set_max_subsystems", 00:06:32.602 "nvmf_stop_mdns_prr", 00:06:32.602 "nvmf_publish_mdns_prr", 00:06:32.602 "nvmf_subsystem_get_listeners", 00:06:32.602 "nvmf_subsystem_get_qpairs", 00:06:32.602 "nvmf_subsystem_get_controllers", 00:06:32.602 "nvmf_get_stats", 00:06:32.602 
"nvmf_get_transports", 00:06:32.602 "nvmf_create_transport", 00:06:32.602 "nvmf_get_targets", 00:06:32.602 "nvmf_delete_target", 00:06:32.602 "nvmf_create_target", 00:06:32.602 "nvmf_subsystem_allow_any_host", 00:06:32.602 "nvmf_subsystem_remove_host", 00:06:32.602 "nvmf_subsystem_add_host", 00:06:32.602 "nvmf_ns_remove_host", 00:06:32.602 "nvmf_ns_add_host", 00:06:32.602 "nvmf_subsystem_remove_ns", 00:06:32.602 "nvmf_subsystem_add_ns", 00:06:32.602 "nvmf_subsystem_listener_set_ana_state", 00:06:32.602 "nvmf_discovery_get_referrals", 00:06:32.602 "nvmf_discovery_remove_referral", 00:06:32.602 "nvmf_discovery_add_referral", 00:06:32.602 "nvmf_subsystem_remove_listener", 00:06:32.602 "nvmf_subsystem_add_listener", 00:06:32.602 "nvmf_delete_subsystem", 00:06:32.602 "nvmf_create_subsystem", 00:06:32.602 "nvmf_get_subsystems", 00:06:32.602 "env_dpdk_get_mem_stats", 00:06:32.602 "nbd_get_disks", 00:06:32.602 "nbd_stop_disk", 00:06:32.602 "nbd_start_disk", 00:06:32.602 "ublk_recover_disk", 00:06:32.602 "ublk_get_disks", 00:06:32.602 "ublk_stop_disk", 00:06:32.602 "ublk_start_disk", 00:06:32.602 "ublk_destroy_target", 00:06:32.602 "ublk_create_target", 00:06:32.602 "virtio_blk_create_transport", 00:06:32.602 "virtio_blk_get_transports", 00:06:32.602 "vhost_controller_set_coalescing", 00:06:32.602 "vhost_get_controllers", 00:06:32.602 "vhost_delete_controller", 00:06:32.602 "vhost_create_blk_controller", 00:06:32.602 "vhost_scsi_controller_remove_target", 00:06:32.602 "vhost_scsi_controller_add_target", 00:06:32.602 "vhost_start_scsi_controller", 00:06:32.602 "vhost_create_scsi_controller", 00:06:32.602 "thread_set_cpumask", 00:06:32.602 "framework_get_scheduler", 00:06:32.602 "framework_set_scheduler", 00:06:32.602 "framework_get_reactors", 00:06:32.602 "thread_get_io_channels", 00:06:32.602 "thread_get_pollers", 00:06:32.602 "thread_get_stats", 00:06:32.602 "framework_monitor_context_switch", 00:06:32.602 "spdk_kill_instance", 00:06:32.602 "log_enable_timestamps", 00:06:32.602 "log_get_flags", 00:06:32.602 "log_clear_flag", 00:06:32.602 "log_set_flag", 00:06:32.602 "log_get_level", 00:06:32.602 "log_set_level", 00:06:32.602 "log_get_print_level", 00:06:32.602 "log_set_print_level", 00:06:32.602 "framework_enable_cpumask_locks", 00:06:32.602 "framework_disable_cpumask_locks", 00:06:32.602 "framework_wait_init", 00:06:32.602 "framework_start_init", 00:06:32.602 "scsi_get_devices", 00:06:32.602 "bdev_get_histogram", 00:06:32.602 "bdev_enable_histogram", 00:06:32.602 "bdev_set_qos_limit", 00:06:32.602 "bdev_set_qd_sampling_period", 00:06:32.602 "bdev_get_bdevs", 00:06:32.602 "bdev_reset_iostat", 00:06:32.602 "bdev_get_iostat", 00:06:32.602 "bdev_examine", 00:06:32.602 "bdev_wait_for_examine", 00:06:32.602 "bdev_set_options", 00:06:32.602 "notify_get_notifications", 00:06:32.602 "notify_get_types", 00:06:32.602 "accel_get_stats", 00:06:32.602 "accel_set_options", 00:06:32.602 "accel_set_driver", 00:06:32.602 "accel_crypto_key_destroy", 00:06:32.602 "accel_crypto_keys_get", 00:06:32.602 "accel_crypto_key_create", 00:06:32.602 "accel_assign_opc", 00:06:32.602 "accel_get_module_info", 00:06:32.602 "accel_get_opc_assignments", 00:06:32.602 "vmd_rescan", 00:06:32.602 "vmd_remove_device", 00:06:32.602 "vmd_enable", 00:06:32.602 "sock_get_default_impl", 00:06:32.602 "sock_set_default_impl", 00:06:32.602 "sock_impl_set_options", 00:06:32.602 "sock_impl_get_options", 00:06:32.602 "iobuf_get_stats", 00:06:32.602 "iobuf_set_options", 00:06:32.602 "keyring_get_keys", 00:06:32.602 "framework_get_pci_devices", 
00:06:32.602 "framework_get_config", 00:06:32.602 "framework_get_subsystems", 00:06:32.602 "vfu_tgt_set_base_path", 00:06:32.602 "trace_get_info", 00:06:32.602 "trace_get_tpoint_group_mask", 00:06:32.602 "trace_disable_tpoint_group", 00:06:32.602 "trace_enable_tpoint_group", 00:06:32.602 "trace_clear_tpoint_mask", 00:06:32.602 "trace_set_tpoint_mask", 00:06:32.602 "spdk_get_version", 00:06:32.602 "rpc_get_methods" 00:06:32.602 ] 00:06:32.602 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.602 03:16:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.602 03:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.859 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:32.859 03:16:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2282653 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2282653 ']' 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2282653 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2282653 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2282653' 00:06:32.859 killing process with pid 2282653 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2282653 00:06:32.859 03:16:17 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2282653 00:06:33.117 00:06:33.117 real 0m1.211s 00:06:33.117 user 0m2.144s 00:06:33.117 sys 0m0.441s 00:06:33.117 03:16:18 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.117 03:16:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.117 ************************************ 00:06:33.117 END TEST spdkcli_tcp 00:06:33.117 ************************************ 00:06:33.117 03:16:18 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.117 03:16:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.117 03:16:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.117 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:06:33.117 ************************************ 00:06:33.117 START TEST dpdk_mem_utility 00:06:33.117 ************************************ 00:06:33.117 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.375 * Looking for test storage... 
00:06:33.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:33.375 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.375 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2282854 00:06:33.375 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.375 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2282854 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2282854 ']' 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.375 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 [2024-07-21 03:16:18.491193] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:33.375 [2024-07-21 03:16:18.491274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282854 ] 00:06:33.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.375 [2024-07-21 03:16:18.549976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.375 [2024-07-21 03:16:18.634491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.632 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.632 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:33.632 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.632 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.632 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.632 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.632 { 00:06:33.632 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.632 } 00:06:33.632 03:16:18 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.632 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.632 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:33.632 1 heaps totaling size 814.000000 MiB 00:06:33.632 size: 814.000000 MiB heap id: 0 00:06:33.632 end heaps---------- 00:06:33.632 8 mempools totaling size 598.116089 MiB 00:06:33.632 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.632 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.632 size: 84.521057 MiB name: bdev_io_2282854 00:06:33.632 size: 51.011292 MiB name: evtpool_2282854 00:06:33.632 size: 50.003479 MiB name: 
msgpool_2282854 00:06:33.632 size: 21.763794 MiB name: PDU_Pool 00:06:33.632 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.632 size: 0.026123 MiB name: Session_Pool 00:06:33.632 end mempools------- 00:06:33.632 6 memzones totaling size 4.142822 MiB 00:06:33.632 size: 1.000366 MiB name: RG_ring_0_2282854 00:06:33.632 size: 1.000366 MiB name: RG_ring_1_2282854 00:06:33.632 size: 1.000366 MiB name: RG_ring_4_2282854 00:06:33.632 size: 1.000366 MiB name: RG_ring_5_2282854 00:06:33.632 size: 0.125366 MiB name: RG_ring_2_2282854 00:06:33.632 size: 0.015991 MiB name: RG_ring_3_2282854 00:06:33.632 end memzones------- 00:06:33.889 03:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.889 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:33.889 list of free elements. size: 12.519348 MiB 00:06:33.889 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:33.889 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:33.889 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:33.889 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:33.889 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:33.889 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:33.889 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:33.889 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:33.889 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:33.889 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:33.889 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:33.889 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:33.889 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:33.889 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:33.889 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:33.889 list of standard malloc elements. 
size: 199.218079 MiB 00:06:33.889 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:33.889 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:33.889 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:33.889 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:33.889 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:33.889 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:33.889 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:33.889 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:33.889 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:33.889 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:33.889 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:33.889 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:33.889 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:33.889 list of memzone associated elements. 
size: 602.262573 MiB 00:06:33.889 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:33.889 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:33.889 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:33.889 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:33.889 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:33.889 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2282854_0 00:06:33.889 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:33.889 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2282854_0 00:06:33.889 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:33.889 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2282854_0 00:06:33.889 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:33.889 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:33.889 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:33.889 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:33.889 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:33.889 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2282854 00:06:33.889 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:33.889 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2282854 00:06:33.889 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:33.889 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2282854 00:06:33.889 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:33.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:33.889 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:33.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:33.889 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:33.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:33.889 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:33.889 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:33.889 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:33.889 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2282854 00:06:33.889 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:33.889 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2282854 00:06:33.889 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:33.889 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2282854 00:06:33.889 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:33.889 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2282854 00:06:33.889 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:33.889 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2282854 00:06:33.889 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:33.889 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:33.889 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:33.889 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:33.889 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:33.889 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:33.889 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:33.889 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2282854 00:06:33.889 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:33.889 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:33.889 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:33.889 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:33.889 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:33.889 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2282854 00:06:33.889 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:33.889 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:33.889 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:33.889 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2282854 00:06:33.889 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:33.889 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2282854 00:06:33.889 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:33.889 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:33.889 03:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:33.889 03:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2282854 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2282854 ']' 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2282854 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2282854 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2282854' 00:06:33.889 killing process with pid 2282854 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2282854 00:06:33.889 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2282854 00:06:34.146 00:06:34.146 real 0m1.045s 00:06:34.146 user 0m0.996s 00:06:34.146 sys 0m0.418s 00:06:34.146 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.146 03:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.146 ************************************ 00:06:34.146 END TEST dpdk_mem_utility 00:06:34.146 ************************************ 00:06:34.146 03:16:19 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.146 03:16:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.146 03:16:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.146 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:06:34.403 ************************************ 00:06:34.403 START TEST event 00:06:34.403 ************************************ 00:06:34.403 03:16:19 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.403 * Looking for test storage... 
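The dpdk_mem_utility run above reduces to a short RPC round-trip: start spdk_tgt, trigger the env_dpdk_get_mem_stats RPC (which, per the trace, writes /tmp/spdk_mem_dump.txt), then post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap id 0. A minimal sketch of that flow, assuming SPDK_DIR points at a built SPDK checkout; the readiness loop is a simplified stand-in for the harness's waitforlisten:

    #!/usr/bin/env bash
    set -e
    SPDK_DIR=${SPDK_DIR:?point this at a built SPDK checkout}   # assumption

    "$SPDK_DIR/build/bin/spdk_tgt" &                 # listens on /var/tmp/spdk.sock
    spdkpid=$!
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                    # crude waitforlisten substitute
    done

    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # -> /tmp/spdk_mem_dump.txt
    "$SPDK_DIR/scripts/dpdk_mem_info.py"                # heaps / mempools / memzones summary
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # free/busy elements of heap id 0
    kill "$spdkpid"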
00:06:34.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.403 03:16:19 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:34.403 03:16:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.403 03:16:19 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.403 03:16:19 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:34.403 03:16:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.403 03:16:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.403 ************************************ 00:06:34.403 START TEST event_perf 00:06:34.403 ************************************ 00:06:34.403 03:16:19 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.403 Running I/O for 1 seconds...[2024-07-21 03:16:19.572274] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:34.403 [2024-07-21 03:16:19.572342] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283044 ] 00:06:34.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.403 [2024-07-21 03:16:19.638283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.660 [2024-07-21 03:16:19.732783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.660 [2024-07-21 03:16:19.732815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.660 [2024-07-21 03:16:19.732839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.660 [2024-07-21 03:16:19.732841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.594 Running I/O for 1 seconds... 00:06:35.594 lcore 0: 227948 00:06:35.594 lcore 1: 227949 00:06:35.594 lcore 2: 227948 00:06:35.594 lcore 3: 227948 00:06:35.594 done. 00:06:35.594 00:06:35.594 real 0m1.255s 00:06:35.594 user 0m4.161s 00:06:35.594 sys 0m0.089s 00:06:35.594 03:16:20 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.594 03:16:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.594 ************************************ 00:06:35.594 END TEST event_perf 00:06:35.594 ************************************ 00:06:35.594 03:16:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.594 03:16:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:35.594 03:16:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.594 03:16:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.594 ************************************ 00:06:35.594 START TEST event_reactor 00:06:35.594 ************************************ 00:06:35.594 03:16:20 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.594 [2024-07-21 03:16:20.870267] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
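event_perf, which just completed above, is a pure event-framework microbenchmark: it starts one reactor per core in the -m mask, dispatches events for -t seconds, and prints a per-lcore counter; the four "lcore N: 2279xx" lines are those totals for a 1-second run on mask 0xF. Rerunning it by hand is just (same SPDK_DIR assumption as above):

    # 4 reactors (cores 0-3), 1 second of event dispatching.
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
    # Output: one "lcore N: <events processed>" line per core, then "done."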
00:06:35.594 [2024-07-21 03:16:20.870335] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283201 ] 00:06:35.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.852 [2024-07-21 03:16:20.934637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.852 [2024-07-21 03:16:21.027840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.222 test_start 00:06:37.223 oneshot 00:06:37.223 tick 100 00:06:37.223 tick 100 00:06:37.223 tick 250 00:06:37.223 tick 100 00:06:37.223 tick 100 00:06:37.223 tick 100 00:06:37.223 tick 250 00:06:37.223 tick 500 00:06:37.223 tick 100 00:06:37.223 tick 100 00:06:37.223 tick 250 00:06:37.223 tick 100 00:06:37.223 tick 100 00:06:37.223 test_end 00:06:37.223 00:06:37.223 real 0m1.250s 00:06:37.223 user 0m1.159s 00:06:37.223 sys 0m0.087s 00:06:37.223 03:16:22 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.223 03:16:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:37.223 ************************************ 00:06:37.223 END TEST event_reactor 00:06:37.223 ************************************ 00:06:37.223 03:16:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.223 03:16:22 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:37.223 03:16:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.223 03:16:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.223 ************************************ 00:06:37.223 START TEST event_reactor_perf 00:06:37.223 ************************************ 00:06:37.223 03:16:22 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.223 [2024-07-21 03:16:22.163124] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
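The reactor test output above reads as a poller trace: "oneshot" is presumably a one-shot poller that unregisters itself after its first call, and each "tick <n>" line a timed poller firing with an <n>-microsecond period, which is why the 100s appear far more often than the 250s and the lone 500 between test_start and test_end. Invocation mirrors the other event binaries:

    # One reactor; -t 1 limits the run to ~1 second. Poller activity is
    # printed as "oneshot" / "tick <period>" lines between test_start/test_end.
    "$SPDK_DIR/test/event/reactor/reactor" -t 1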
00:06:37.223 [2024-07-21 03:16:22.163187] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283364 ] 00:06:37.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.223 [2024-07-21 03:16:22.225667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.223 [2024-07-21 03:16:22.318700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.154 test_start 00:06:38.154 test_end 00:06:38.154 Performance: 352825 events per second 00:06:38.154 00:06:38.154 real 0m1.252s 00:06:38.154 user 0m1.165s 00:06:38.154 sys 0m0.082s 00:06:38.154 03:16:23 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.154 03:16:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.154 ************************************ 00:06:38.154 END TEST event_reactor_perf 00:06:38.154 ************************************ 00:06:38.154 03:16:23 event -- event/event.sh@49 -- # uname -s 00:06:38.154 03:16:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:38.154 03:16:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.154 03:16:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.154 03:16:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.154 03:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.154 ************************************ 00:06:38.154 START TEST event_scheduler 00:06:38.154 ************************************ 00:06:38.154 03:16:23 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.411 * Looking for test storage... 00:06:38.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:38.411 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:38.411 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2283542 00:06:38.411 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:38.411 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.411 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2283542 00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2283542 ']' 00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
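reactor_perf, finished just above, measures raw event throughput on a single reactor: it chains events back-to-back for -t seconds and divides by the elapsed time, which is where the "Performance: 352825 events per second" line comes from. By hand:

    # Single-core event throughput; prints "Performance: <N> events per second".
    "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1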
00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.411 03:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.411 [2024-07-21 03:16:23.540741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:38.411 [2024-07-21 03:16:23.540829] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283542 ] 00:06:38.411 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.411 [2024-07-21 03:16:23.598502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.411 [2024-07-21 03:16:23.685241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.411 [2024-07-21 03:16:23.685304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.411 [2024-07-21 03:16:23.685369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.411 [2024-07-21 03:16:23.685372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.671 03:16:23 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.671 03:16:23 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:38.671 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:38.671 03:16:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.671 03:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.671 POWER: Env isn't set yet! 00:06:38.671 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:38.671 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:38.671 POWER: Cannot get available frequencies of lcore 0 00:06:38.671 POWER: Attempting to initialise PSTAT power management... 
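The POWER: lines come from DPDK's power library, which the dynamic scheduler relies on: it probes ACPI cpufreq first, falls back to PSTAT when scaling_available_frequencies is missing, and flips each lcore's governor to 'performance' (restoring the originals at shutdown, as the later 'userspace'/'schedutil' lines show). The same knobs are standard Linux cpufreq sysfs and can be inspected outside SPDK:

    # Read core 0's current and available governors (standard cpufreq sysfs).
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    # Pin core 0 to 'performance' (root required); DPDK does this per lcore.
    echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor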
00:06:38.671 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:38.671 POWER: Initialized successfully for lcore 0 power management 00:06:38.671 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:38.671 POWER: Initialized successfully for lcore 1 power management 00:06:38.671 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:38.671 POWER: Initialized successfully for lcore 2 power management 00:06:38.671 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:38.671 POWER: Initialized successfully for lcore 3 power management 00:06:38.671 [2024-07-21 03:16:23.776805] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:38.671 [2024-07-21 03:16:23.776823] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:38.672 [2024-07-21 03:16:23.776834] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 [2024-07-21 03:16:23.872199] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 ************************************ 00:06:38.672 START TEST scheduler_create_thread 00:06:38.672 ************************************ 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 2 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 3 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 4 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 5 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 6 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 7 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 8 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 9 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:38.672 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.929 10 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.929 03:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.929 03:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.929 03:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:38.929 03:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.929 03:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.298 03:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.298 03:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:40.298 03:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:40.298 03:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.298 03:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.229 03:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.229 00:06:41.229 real 0m2.616s 00:06:41.229 user 0m0.013s 00:06:41.229 sys 0m0.002s 00:06:41.229 03:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.229 03:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.229 ************************************ 00:06:41.229 END TEST scheduler_create_thread 00:06:41.229 ************************************ 00:06:41.229 03:16:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:41.229 03:16:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2283542 00:06:41.229 03:16:26 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2283542 ']' 00:06:41.229 03:16:26 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2283542 00:06:41.229 03:16:26 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
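scheduler_create_thread, which ends just above, drives everything over RPC: the app came up with --wait-for-rpc, the harness selected the dynamic scheduler and released initialization, then used a test-only RPC plugin to create pinned active/idle threads (cpumask -m, active percentage -a), retune one with scheduler_thread_set_active, and delete another. A sketch of the full sequence, with RPC names and arguments copied from the trace; that rpc.py can import scheduler_plugin via PYTHONPATH is an assumption about where the test ships its plugin:

    rpc="$SPDK_DIR/scripts/rpc.py"
    export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"   # assumption: plugin location

    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    "$rpc" framework_set_scheduler dynamic    # must precede framework_start_init
    "$rpc" framework_start_init               # releases the app from --wait-for-rpc

    # A thread pinned to core 0, busy 100% of the time, then retuned to 50%:
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

    # A throwaway thread, created and immediately deleted:
    tid2=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid2"
    kill "$scheduler_pid"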
00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2283542 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2283542' 00:06:41.487 killing process with pid 2283542 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2283542 00:06:41.487 03:16:26 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2283542 00:06:41.744 [2024-07-21 03:16:26.999356] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:42.003 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:42.003 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:42.003 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:42.003 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:42.003 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:42.003 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:42.003 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:42.003 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:42.003 00:06:42.003 real 0m3.776s 00:06:42.003 user 0m5.746s 00:06:42.003 sys 0m0.325s 00:06:42.003 03:16:27 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.003 03:16:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.003 ************************************ 00:06:42.003 END TEST event_scheduler 00:06:42.003 ************************************ 00:06:42.003 03:16:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:42.003 03:16:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:42.003 03:16:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.003 03:16:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.003 03:16:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.003 ************************************ 00:06:42.003 START TEST app_repeat 00:06:42.003 ************************************ 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2284115 00:06:42.003 03:16:27 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2284115' 00:06:42.003 Process app_repeat pid: 2284115 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:42.003 spdk_app_start Round 0 00:06:42.003 03:16:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2284115 /var/tmp/spdk-nbd.sock 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2284115 ']' 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.003 03:16:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.003 [2024-07-21 03:16:27.306516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:42.003 [2024-07-21 03:16:27.306583] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284115 ] 00:06:42.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.261 [2024-07-21 03:16:27.370133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.261 [2024-07-21 03:16:27.461165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.261 [2024-07-21 03:16:27.461171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.261 03:16:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.261 03:16:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:42.261 03:16:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.519 Malloc0 00:06:42.519 03:16:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.776 Malloc1 00:06:43.052 03:16:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.052 03:16:28 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.052 03:16:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.052 /dev/nbd0 00:06:43.309 03:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.309 03:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.309 1+0 records in 00:06:43.309 1+0 records out 00:06:43.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017822 s, 23.0 MB/s 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:43.309 03:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:43.309 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.309 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.309 03:16:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.571 /dev/nbd1 00:06:43.571 03:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.571 03:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:43.571 03:16:28 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.571 1+0 records in 00:06:43.571 1+0 records out 00:06:43.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215326 s, 19.0 MB/s 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:43.571 03:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:43.571 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.571 03:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.572 03:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.572 03:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.572 03:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.867 { 00:06:43.867 "nbd_device": "/dev/nbd0", 00:06:43.867 "bdev_name": "Malloc0" 00:06:43.867 }, 00:06:43.867 { 00:06:43.867 "nbd_device": "/dev/nbd1", 00:06:43.867 "bdev_name": "Malloc1" 00:06:43.867 } 00:06:43.867 ]' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.867 { 00:06:43.867 "nbd_device": "/dev/nbd0", 00:06:43.867 "bdev_name": "Malloc0" 00:06:43.867 }, 00:06:43.867 { 00:06:43.867 "nbd_device": "/dev/nbd1", 00:06:43.867 "bdev_name": "Malloc1" 00:06:43.867 } 00:06:43.867 ]' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.867 /dev/nbd1' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.867 /dev/nbd1' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.867 03:16:28 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.867 256+0 records in 00:06:43.867 256+0 records out 00:06:43.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521012 s, 201 MB/s 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.867 256+0 records in 00:06:43.867 256+0 records out 00:06:43.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239588 s, 43.8 MB/s 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.867 03:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.867 256+0 records in 00:06:43.867 256+0 records out 00:06:43.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253251 s, 41.4 MB/s 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.867 03:16:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.129 03:16:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.386 03:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.643 03:16:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.643 03:16:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.899 03:16:30 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:45.155 [2024-07-21 03:16:30.366368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.156 [2024-07-21 03:16:30.457238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.156 [2024-07-21 03:16:30.457239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.412 [2024-07-21 03:16:30.514454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.412 [2024-07-21 03:16:30.514521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.931 03:16:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.931 03:16:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:47.931 spdk_app_start Round 1 00:06:47.931 03:16:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2284115 /var/tmp/spdk-nbd.sock 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2284115 ']' 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.931 03:16:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.188 03:16:33 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.188 03:16:33 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:48.188 03:16:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.445 Malloc0 00:06:48.445 03:16:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.703 Malloc1 00:06:48.703 03:16:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
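Round 1 rebuilds the same fixture Round 0 used: two 64 MiB malloc bdevs with a 4096-byte block size, exported as /dev/nbd0 and /dev/nbd1 through the app_repeat instance listening on /var/tmp/spdk-nbd.sock. The RPC sequence, with sizes and socket path taken from the trace:

    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create 64 4096        # 64 MiB, 4 KiB blocks -> prints "Malloc0"
    rpc bdev_malloc_create 64 4096        # -> "Malloc1"
    rpc nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as a kernel block device
    rpc nbd_start_disk Malloc1 /dev/nbd1
    rpc nbd_get_disks                     # JSON: [{"nbd_device": ..., "bdev_name": ...}, ...]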
00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.703 03:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.960 /dev/nbd0 00:06:48.960 03:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.960 03:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.960 1+0 records in 00:06:48.960 1+0 records out 00:06:48.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187005 s, 21.9 MB/s 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:48.960 03:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:48.960 03:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.960 03:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.960 03:16:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.218 /dev/nbd1 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
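The waitfornbd helper traced above for nbd0 polls /proc/partitions until the nbd name appears, then reads one block back through the device to prove it actually services I/O. A sketch consistent with the trace; the retry sleep is assumed, since it never fires in this log (the device is present on the first probe):

waitfornbd() {
    local nbd_name=$1 i
    local tmp=/tmp/nbdtest   # stand-in path; the suite writes to spdk/test/event/nbdtest

    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1            # assumed backoff; never hit in this trace
    done

    # Prove the device services reads: pull one 4k block with O_DIRECT
    # and confirm a non-empty file came back.
    dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]         # function status: non-zero size means success
}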
00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.218 1+0 records in 00:06:49.218 1+0 records out 00:06:49.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209686 s, 19.5 MB/s 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:49.218 03:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.218 03:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.477 { 00:06:49.477 "nbd_device": "/dev/nbd0", 00:06:49.477 "bdev_name": "Malloc0" 00:06:49.477 }, 00:06:49.477 { 00:06:49.477 "nbd_device": "/dev/nbd1", 00:06:49.477 "bdev_name": "Malloc1" 00:06:49.477 } 00:06:49.477 ]' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.477 { 00:06:49.477 "nbd_device": "/dev/nbd0", 00:06:49.477 "bdev_name": "Malloc0" 00:06:49.477 }, 00:06:49.477 { 00:06:49.477 "nbd_device": "/dev/nbd1", 00:06:49.477 "bdev_name": "Malloc1" 00:06:49.477 } 00:06:49.477 ]' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.477 /dev/nbd1' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.477 /dev/nbd1' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.477 256+0 records in 00:06:49.477 256+0 records out 00:06:49.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050168 s, 209 MB/s 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.477 256+0 records in 00:06:49.477 256+0 records out 00:06:49.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234338 s, 44.7 MB/s 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.477 03:16:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.735 256+0 records in 00:06:49.735 256+0 records out 00:06:49.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253682 s, 41.3 MB/s 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.735 03:16:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.993 03:16:35 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.993 03:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.251 03:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.509 03:16:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.509 03:16:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.766 03:16:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.025 [2024-07-21 03:16:36.151074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.025 [2024-07-21 03:16:36.241531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.025 [2024-07-21 03:16:36.241535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.025 [2024-07-21 03:16:36.304703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
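The nbd_get_count check traced in the teardown above asks the target which nbd devices are still attached and counts them; grep -c exits non-zero when it matches nothing, which is why the trace shows a bare true after it. Reconstructed sketch (the $rpc path is the workspace rpc.py as before):

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count

    nbd_disks_json=$($rpc -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits 1 on zero matches; || true keeps the count at 0 without failing
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}

After both nbd_stop_disk calls the JSON is an empty list, so the count comes back 0 and the '[' 0 -ne 0 ']' assertion passes.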
00:06:51.025 [2024-07-21 03:16:36.304781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.299 03:16:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.299 03:16:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:54.299 spdk_app_start Round 2 00:06:54.299 03:16:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2284115 /var/tmp/spdk-nbd.sock 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2284115 ']' 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.299 03:16:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.299 03:16:39 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.299 03:16:39 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:54.299 03:16:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.299 Malloc0 00:06:54.299 03:16:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.555 Malloc1 00:06:54.556 03:16:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.556 03:16:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.812 /dev/nbd0 00:06:54.812 
03:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.812 03:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:54.812 03:16:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.812 1+0 records in 00:06:54.812 1+0 records out 00:06:54.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162051 s, 25.3 MB/s 00:06:54.812 03:16:40 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.812 03:16:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:54.812 03:16:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.812 03:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:54.812 03:16:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:54.812 03:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.812 03:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.812 03:16:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.069 /dev/nbd1 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.069 1+0 records in 00:06:55.069 1+0 records out 00:06:55.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217042 s, 18.9 MB/s 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:55.069 03:16:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.069 03:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.326 { 00:06:55.326 "nbd_device": "/dev/nbd0", 00:06:55.326 "bdev_name": "Malloc0" 00:06:55.326 }, 00:06:55.326 { 00:06:55.326 "nbd_device": "/dev/nbd1", 00:06:55.326 "bdev_name": "Malloc1" 00:06:55.326 } 00:06:55.326 ]' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.326 { 00:06:55.326 "nbd_device": "/dev/nbd0", 00:06:55.326 "bdev_name": "Malloc0" 00:06:55.326 }, 00:06:55.326 { 00:06:55.326 "nbd_device": "/dev/nbd1", 00:06:55.326 "bdev_name": "Malloc1" 00:06:55.326 } 00:06:55.326 ]' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.326 /dev/nbd1' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.326 /dev/nbd1' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.326 256+0 records in 00:06:55.326 256+0 records out 00:06:55.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410501 s, 255 MB/s 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.326 256+0 records in 00:06:55.326 256+0 records out 00:06:55.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234165 s, 44.8 MB/s 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.326 03:16:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.583 256+0 records in 00:06:55.583 256+0 records out 00:06:55.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025192 s, 41.6 MB/s 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.583 03:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
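The data check in each round is a plain dd/cmp round trip: fill a 1 MiB scratch file from /dev/urandom, dd it onto every exported nbd device with O_DIRECT, then cmp the first 1M of each device back against the scratch file. A sketch reconstructed from the trace; the real helper takes an operation argument and runs once for "write" and once for "verify", folded together here:

nbd_dd_data_verify() {
    local nbd_list=("$@")
    local tmp_file=/tmp/nbdrandtest   # suite path: spdk/test/event/nbdrandtest
    local i

    # write phase: 256 x 4096-byte blocks = 1 MiB of random data onto every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-for-byte compare of the first 1M of each device
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
}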
00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.841 03:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.098 03:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:56.356 03:16:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:56.356 03:16:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.613 03:16:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.871 [2024-07-21 03:16:41.972783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.871 [2024-07-21 03:16:42.063434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.871 [2024-07-21 03:16:42.063438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.871 [2024-07-21 03:16:42.125564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.871 [2024-07-21 03:16:42.125687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
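The nbd_stop_disk teardown is not instantaneous from the kernel's side, so waitfornbd_exit, traced above for nbd0 and nbd1, polls /proc/partitions until the device name disappears before teardown continues. A sketch matching the trace; the loop body between the grep at nbd_common.sh@38 and the break at @41 is inferred:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # assumed; device still visible, keep waiting
        else
            break       # gone from /proc/partitions, safe to proceed
        fi
    done
    return 0
}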
00:07:00.148 03:16:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2284115 /var/tmp/spdk-nbd.sock 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2284115 ']' 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.148 03:16:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:00.148 03:16:45 event.app_repeat -- event/event.sh@39 -- # killprocess 2284115 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2284115 ']' 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2284115 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2284115 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2284115' 00:07:00.148 killing process with pid 2284115 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2284115 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2284115 00:07:00.148 spdk_app_start is called in Round 0. 00:07:00.148 Shutdown signal received, stop current app iteration 00:07:00.148 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:00.148 spdk_app_start is called in Round 1. 00:07:00.148 Shutdown signal received, stop current app iteration 00:07:00.148 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:00.148 spdk_app_start is called in Round 2. 00:07:00.148 Shutdown signal received, stop current app iteration 00:07:00.148 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:00.148 spdk_app_start is called in Round 3. 
00:07:00.148 Shutdown signal received, stop current app iteration 00:07:00.148 03:16:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:00.148 03:16:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:00.148 00:07:00.148 real 0m17.956s 00:07:00.148 user 0m39.126s 00:07:00.148 sys 0m3.220s 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.148 03:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.148 ************************************ 00:07:00.148 END TEST app_repeat 00:07:00.148 ************************************ 00:07:00.148 03:16:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:00.148 03:16:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:00.148 03:16:45 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.148 03:16:45 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.148 03:16:45 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.148 ************************************ 00:07:00.148 START TEST cpu_locks 00:07:00.148 ************************************ 00:07:00.148 03:16:45 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:00.148 * Looking for test storage... 00:07:00.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:00.148 03:16:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:00.148 03:16:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:00.148 03:16:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:00.148 03:16:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:00.148 03:16:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.148 03:16:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.148 03:16:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.148 ************************************ 00:07:00.148 START TEST default_locks 00:07:00.148 ************************************ 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2286468 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2286468 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2286468 ']' 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
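killprocess, used to reap the app_repeat target above and each cpu_locks target below, refuses to signal anything it does not recognize: it checks that the pid is alive, confirms the command name (reactor_0 here) is not sudo, then kills and waits so the exit status, lock files, and RPC socket are all released. Sketch reconstructed from the common/autotest_common.sh trace:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1            # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" here
    fi
    # the sudo guard branch is never taken in this trace; bailing out is assumed
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it so the cpu lock file is freed
}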
00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.148 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.148 [2024-07-21 03:16:45.417836] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:00.149 [2024-07-21 03:16:45.417913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286468 ] 00:07:00.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.405 [2024-07-21 03:16:45.483633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.405 [2024-07-21 03:16:45.575034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.662 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.662 03:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:00.662 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2286468 00:07:00.662 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2286468 00:07:00.662 03:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.919 lslocks: write error 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2286468 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2286468 ']' 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2286468 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2286468 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2286468' 00:07:00.919 killing process with pid 2286468 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2286468 00:07:00.919 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2286468 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2286468 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2286468 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 2286468 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2286468 ']' 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2286468) - No such process 00:07:01.483 ERROR: process (pid: 2286468) is no longer running 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.483 00:07:01.483 real 0m1.232s 00:07:01.483 user 0m1.177s 00:07:01.483 sys 0m0.545s 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.483 03:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.483 ************************************ 00:07:01.483 END TEST default_locks 00:07:01.483 ************************************ 00:07:01.483 03:16:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:01.483 03:16:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.483 03:16:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.483 03:16:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.483 ************************************ 00:07:01.483 START TEST default_locks_via_rpc 00:07:01.483 ************************************ 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2286632 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2286632 00:07:01.483 03:16:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2286632 ']' 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.483 03:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.483 [2024-07-21 03:16:46.700489] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:01.483 [2024-07-21 03:16:46.700591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286632 ] 00:07:01.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.483 [2024-07-21 03:16:46.763130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.740 [2024-07-21 03:16:46.851499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2286632 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2286632 00:07:01.997 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2286632 00:07:02.253 03:16:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2286632 ']' 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2286632 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2286632 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2286632' 00:07:02.253 killing process with pid 2286632 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2286632 00:07:02.253 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2286632 00:07:02.817 00:07:02.817 real 0m1.208s 00:07:02.817 user 0m1.132s 00:07:02.817 sys 0m0.528s 00:07:02.817 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.817 03:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.817 ************************************ 00:07:02.817 END TEST default_locks_via_rpc 00:07:02.817 ************************************ 00:07:02.817 03:16:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:02.817 03:16:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.818 03:16:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.818 03:16:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.818 ************************************ 00:07:02.818 START TEST non_locking_app_on_locked_coremask 00:07:02.818 ************************************ 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2286792 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2286792 /var/tmp/spdk.sock 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2286792 ']' 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
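The default_locks case earlier ended by asserting the opposite of success: after the target was killed, waitforlisten for the same pid had to fail ("No such process"). The NOT wrapper traced there runs a command, captures its status in es, and inverts it. A simplified sketch of the traced lines:

NOT() {
    local es=0
    "$@" || es=$?
    # The traced helper also validates that $1 is runnable (valid_exec_arg)
    # and special-cases exit codes above 128, i.e. signal deaths; both are
    # elided in this sketch.
    ((!es == 0))   # the arithmetic IS the return: success only if "$@" failed
}

Usage, as in the trace: NOT waitforlisten 2286468 passes precisely because the pid is gone and waitforlisten returns 1.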
00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.818 03:16:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.818 [2024-07-21 03:16:47.955084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.818 [2024-07-21 03:16:47.955166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286792 ] 00:07:02.818 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.818 [2024-07-21 03:16:48.018311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.818 [2024-07-21 03:16:48.113322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2286822 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2286822 /var/tmp/spdk2.sock 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2286822 ']' 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.076 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.335 [2024-07-21 03:16:48.421610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.335 [2024-07-21 03:16:48.421721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286822 ] 00:07:03.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.335 [2024-07-21 03:16:48.517081] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
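locks_exist, traced at cpu_locks.sh@22 for each target in this file, is the assertion at the heart of the suite:

locks_exist() {
    # true iff the pid holds a lock on a file whose path mentions spdk_cpu_lock
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

The "lslocks: write error" that appears next to it in the log is expected noise: grep -q exits on the first match, the pipe closes, and lslocks complains about the resulting EPIPE.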
00:07:03.335 [2024-07-21 03:16:48.517119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.592 [2024-07-21 03:16:48.700044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.158 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.158 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:04.158 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2286792 00:07:04.158 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2286792 00:07:04.158 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.723 lslocks: write error 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2286792 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2286792 ']' 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2286792 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2286792 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2286792' 00:07:04.723 killing process with pid 2286792 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2286792 00:07:04.723 03:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2286792 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2286822 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2286822 ']' 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2286822 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.321 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2286822 00:07:05.580 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.580 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.580 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2286822' 00:07:05.580 
killing process with pid 2286822 00:07:05.580 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2286822 00:07:05.580 03:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2286822 00:07:05.839 00:07:05.839 real 0m3.145s 00:07:05.839 user 0m3.276s 00:07:05.839 sys 0m1.089s 00:07:05.839 03:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.839 03:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.839 ************************************ 00:07:05.839 END TEST non_locking_app_on_locked_coremask 00:07:05.839 ************************************ 00:07:05.839 03:16:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:05.839 03:16:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.839 03:16:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.839 03:16:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.839 ************************************ 00:07:05.839 START TEST locking_app_on_unlocked_coremask 00:07:05.839 ************************************ 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2287228 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2287228 /var/tmp/spdk.sock 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287228 ']' 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.839 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.839 [2024-07-21 03:16:51.151423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:05.840 [2024-07-21 03:16:51.151524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287228 ] 00:07:06.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.098 [2024-07-21 03:16:51.214686] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
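The "lslocks: write error" lines scattered through these lock checks are, in all likelihood, benign: locks_exist pipes lslocks into grep -q, grep exits on its first match, and lslocks takes an EPIPE on its next write. The helper itself is tiny; a sketch consistent with the cpu_locks.sh@22 trace above:

    # locks_exist (sketch): a target that claimed core NNN holds an
    # advisory lock on /var/tmp/spdk_cpu_lock_NNN, which lslocks lists
    # by path for the given pid.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }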
00:07:06.098 [2024-07-21 03:16:51.214725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.098 [2024-07-21 03:16:51.302627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2287237 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2287237 /var/tmp/spdk2.sock 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287237 ']' 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.357 03:16:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.357 [2024-07-21 03:16:51.610650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
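What locking_app_on_unlocked_coremask has just set up, condensed into a sketch (binary and socket paths as used in this workspace): the first target opts out of core locking, so a second target can bind the same core and becomes the one that actually claims it.

    # First instance: core 0, no lock taken (pid 2287228 above).
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # Second instance: same core, default locking, own RPC socket
    # (pid 2287237 above); this one creates /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &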
00:07:06.357 [2024-07-21 03:16:51.610746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287237 ] 00:07:06.357 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.615 [2024-07-21 03:16:51.709278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.615 [2024-07-21 03:16:51.889943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.548 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.548 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:07.548 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2287237 00:07:07.548 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2287237 00:07:07.548 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.805 lslocks: write error 00:07:07.805 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2287228 00:07:07.805 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2287228 ']' 00:07:07.805 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2287228 00:07:07.805 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2287228 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2287228' 00:07:07.806 killing process with pid 2287228 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2287228 00:07:07.806 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2287228 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2287237 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2287237 ']' 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2287237 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2287237 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2287237' 00:07:08.738 killing process with pid 2287237 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2287237 00:07:08.738 03:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2287237 00:07:09.301 00:07:09.301 real 0m3.245s 00:07:09.301 user 0m3.380s 00:07:09.301 sys 0m1.087s 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.302 ************************************ 00:07:09.302 END TEST locking_app_on_unlocked_coremask 00:07:09.302 ************************************ 00:07:09.302 03:16:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.302 03:16:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.302 03:16:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.302 03:16:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.302 ************************************ 00:07:09.302 START TEST locking_app_on_locked_coremask 00:07:09.302 ************************************ 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2287662 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2287662 /var/tmp/spdk.sock 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287662 ']' 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.302 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.302 [2024-07-21 03:16:54.446552] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
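The killprocess helper traced repeatedly above guards the kill with two checks: the pid must still exist (kill -0) and, on Linux, its comm name must not be sudo. A simplified equivalent of what the trace shows, not the full helper:

    # killprocess (sketch of the traced checks only):
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                      # still alive?
        local name; name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1                 # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait works: spdk_tgt is our child
    }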
00:07:09.302 [2024-07-21 03:16:54.446651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287662 ] 00:07:09.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.302 [2024-07-21 03:16:54.510056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.302 [2024-07-21 03:16:54.599362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.559 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.559 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:09.559 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2287671 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2287671 /var/tmp/spdk2.sock 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2287671 /var/tmp/spdk2.sock 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2287671 /var/tmp/spdk2.sock 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287671 ']' 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.560 03:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.817 [2024-07-21 03:16:54.911831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
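waitforlisten, used for every target here (and deliberately expected to fail just below), boils down to polling the RPC socket until the app answers. A rough equivalent; the real helper in autotest_common.sh is more careful, so treat the polled method and the retry bound as illustrative:

    # Simplified waitforlisten (sketch): poll the target's RPC socket
    # until any RPC succeeds, up to max_retries attempts.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1                  # app died early
            ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }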
00:07:09.817 [2024-07-21 03:16:54.911926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287671 ] 00:07:09.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.817 [2024-07-21 03:16:55.009389] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2287662 has claimed it. 00:07:09.817 [2024-07-21 03:16:55.009464] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2287671) - No such process 00:07:10.380 ERROR: process (pid: 2287671) is no longer running 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2287662 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2287662 00:07:10.380 03:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.944 lslocks: write error 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2287662 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2287662 ']' 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2287662 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2287662 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2287662' 00:07:10.944 killing process with pid 2287662 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2287662 00:07:10.944 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2287662 00:07:11.508 00:07:11.508 real 0m2.133s 00:07:11.508 user 0m2.266s 00:07:11.508 sys 0m0.690s 00:07:11.508 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.508 03:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 ************************************ 00:07:11.508 END TEST locking_app_on_locked_coremask 00:07:11.508 ************************************ 00:07:11.508 03:16:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.508 03:16:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.508 03:16:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.508 03:16:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 ************************************ 00:07:11.508 START TEST locking_overlapped_coremask 00:07:11.508 ************************************ 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2287966 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2287966 /var/tmp/spdk.sock 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287966 ']' 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.508 03:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 [2024-07-21 03:16:56.627766] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
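The expected-failure plumbing in the previous test reads backwards at first glance: NOT waitforlisten ... passes precisely because waitforlisten fails. The helper simply inverts the exit status, with es tracking the original code; a minimal sketch of that inversion, assuming nothing beyond what the trace shows:

    # NOT (sketch): succeed iff the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded
        else
            return 0        # expected failure
        fi
    }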
00:07:11.508 [2024-07-21 03:16:56.627858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287966 ] 00:07:11.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.508 [2024-07-21 03:16:56.690695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.508 [2024-07-21 03:16:56.780522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.508 [2024-07-21 03:16:56.780575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.508 [2024-07-21 03:16:56.780592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2287971 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2287971 /var/tmp/spdk2.sock 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2287971 /var/tmp/spdk2.sock 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2287971 /var/tmp/spdk2.sock 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2287971 ']' 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.765 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.021 [2024-07-21 03:16:57.084175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
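The failure that follows is pure bitmask arithmetic: the running target holds -m 0x7 (binary 111, cores 0-2) and the newcomer asks for -m 0x1c (binary 11100, cores 2-4). The masks intersect at bit 2, so core 2 is the contested lock:

    # Overlap check: a non-zero AND means shared cores.
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 set: core 2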
00:07:12.021 [2024-07-21 03:16:57.084268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287971 ] 00:07:12.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.021 [2024-07-21 03:16:57.170768] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2287966 has claimed it. 00:07:12.021 [2024-07-21 03:16:57.170834] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2287971) - No such process 00:07:12.585 ERROR: process (pid: 2287971) is no longer running 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2287966 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2287966 ']' 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2287966 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2287966 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2287966' 00:07:12.585 killing process with pid 2287966 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2287966 00:07:12.585 03:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2287966 00:07:13.152 00:07:13.152 real 0m1.637s 00:07:13.152 user 0m4.405s 00:07:13.152 sys 0m0.450s 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.152 ************************************ 00:07:13.152 END TEST locking_overlapped_coremask 00:07:13.152 ************************************ 00:07:13.152 03:16:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.152 03:16:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:13.152 03:16:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.152 03:16:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.152 ************************************ 00:07:13.152 START TEST locking_overlapped_coremask_via_rpc 00:07:13.152 ************************************ 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2288135 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2288135 /var/tmp/spdk.sock 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2288135 ']' 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.152 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.152 [2024-07-21 03:16:58.315144] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.152 [2024-07-21 03:16:58.315228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288135 ] 00:07:13.152 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.152 [2024-07-21 03:16:58.379431] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
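Before tearing down, the previous test ran check_remaining_locks: the glob of real lock files must equal, element for element, the brace expansion of the expected ones for a three-core mask. As traced above, that is essentially:

    # check_remaining_locks (sketch matching the cpu_locks.sh@36-38 trace):
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # exact match required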
00:07:13.152 [2024-07-21 03:16:58.379479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.410 [2024-07-21 03:16:58.480641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.410 [2024-07-21 03:16:58.480686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.410 [2024-07-21 03:16:58.484645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2288271 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2288271 /var/tmp/spdk2.sock 00:07:13.667 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2288271 ']' 00:07:13.668 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.668 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.668 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.668 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.668 03:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.668 [2024-07-21 03:16:58.788876] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.668 [2024-07-21 03:16:58.788982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288271 ] 00:07:13.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.668 [2024-07-21 03:16:58.875752] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.668 [2024-07-21 03:16:58.875796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.925 [2024-07-21 03:16:59.052162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.925 [2024-07-21 03:16:59.052229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.925 [2024-07-21 03:16:59.052231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.490 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.490 [2024-07-21 03:16:59.741720] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2288135 has claimed it. 
00:07:14.490 request: 00:07:14.490 { 00:07:14.491 "method": "framework_enable_cpumask_locks", 00:07:14.491 "req_id": 1 00:07:14.491 } 00:07:14.491 Got JSON-RPC error response 00:07:14.491 response: 00:07:14.491 { 00:07:14.491 "code": -32603, 00:07:14.491 "message": "Failed to claim CPU core: 2" 00:07:14.491 } 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2288135 /var/tmp/spdk.sock 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2288135 ']' 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.491 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2288271 /var/tmp/spdk2.sock 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2288271 ']' 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
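The request/response pair above is the on-the-wire shape of the negative case: framework_enable_cpumask_locks on the second target fails with JSON-RPC error -32603 because core 2 is already locked by pid 2288135. Outside the harness, the same call is one rpc.py invocation against the second socket (rpc_cmd is a thin wrapper around this):

    # Manual equivalent of the failing call above; expect the
    # "Failed to claim CPU core: 2" error while pid 2288135 lives.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks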
00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.748 03:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.006 00:07:15.006 real 0m1.968s 00:07:15.006 user 0m1.069s 00:07:15.006 sys 0m0.193s 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.006 03:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.006 ************************************ 00:07:15.006 END TEST locking_overlapped_coremask_via_rpc 00:07:15.006 ************************************ 00:07:15.006 03:17:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.006 03:17:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2288135 ]] 00:07:15.006 03:17:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2288135 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2288135 ']' 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2288135 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2288135 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2288135' 00:07:15.006 killing process with pid 2288135 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2288135 00:07:15.006 03:17:00 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2288135 00:07:15.571 03:17:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2288271 ]] 00:07:15.571 03:17:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2288271 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2288271 ']' 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2288271 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2288271 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2288271' 00:07:15.571 killing process with pid 2288271 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2288271 00:07:15.571 03:17:00 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2288271 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2288135 ]] 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2288135 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2288135 ']' 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2288135 00:07:15.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2288135) - No such process 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2288135 is not found' 00:07:15.828 Process with pid 2288135 is not found 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2288271 ]] 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2288271 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2288271 ']' 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2288271 00:07:15.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2288271) - No such process 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2288271 is not found' 00:07:15.828 Process with pid 2288271 is not found 00:07:15.828 03:17:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.828 00:07:15.828 real 0m15.822s 00:07:15.828 user 0m27.399s 00:07:15.828 sys 0m5.475s 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.828 03:17:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.828 ************************************ 00:07:15.828 END TEST cpu_locks 00:07:15.828 ************************************ 00:07:15.828 00:07:15.828 real 0m41.653s 00:07:15.828 user 1m18.895s 00:07:15.828 sys 0m9.502s 00:07:15.828 03:17:01 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.828 03:17:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.828 ************************************ 00:07:15.828 END TEST event 00:07:15.828 ************************************ 00:07:16.086 03:17:01 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.086 03:17:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.086 03:17:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.086 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 ************************************ 00:07:16.086 START TEST thread 00:07:16.086 ************************************ 00:07:16.086 03:17:01 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.086 * Looking for test storage... 00:07:16.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:16.086 03:17:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.086 03:17:01 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:16.086 03:17:01 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.086 03:17:01 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 ************************************ 00:07:16.086 START TEST thread_poller_perf 00:07:16.086 ************************************ 00:07:16.086 03:17:01 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.086 [2024-07-21 03:17:01.266820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:16.086 [2024-07-21 03:17:01.266888] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288635 ] 00:07:16.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.086 [2024-07-21 03:17:01.325926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.343 [2024-07-21 03:17:01.415308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.343 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:17.275 ====================================== 00:07:17.275 busy:2708088847 (cyc) 00:07:17.275 total_run_count: 293000 00:07:17.275 tsc_hz: 2700000000 (cyc) 00:07:17.275 ====================================== 00:07:17.275 poller_cost: 9242 (cyc), 3422 (nsec) 00:07:17.275 00:07:17.275 real 0m1.253s 00:07:17.275 user 0m1.169s 00:07:17.275 sys 0m0.079s 00:07:17.275 03:17:02 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.275 03:17:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 END TEST thread_poller_perf 00:07:17.275 ************************************ 00:07:17.275 03:17:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:17.275 03:17:02 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:17.275 03:17:02 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.275 03:17:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 START TEST thread_poller_perf 00:07:17.275 ************************************ 00:07:17.275 03:17:02 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:17.275 [2024-07-21 03:17:02.568553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
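The poller_perf summaries are internally consistent and worth a quick check: poller_cost is busy cycles divided by iterations, converted to nanoseconds at the reported 2.7 GHz TSC. For the 1 us timed run above and the zero-period run summarized just below, the roughly 13x gap is presumably the timer bookkeeping that period-driven pollers pay per invocation:

    # Timed pollers (-l 1): 2708088847 cyc / 293000 polls
    echo $(( 2708088847 / 293000 ))   # -> 9242 cyc, ~3423 ns (tool prints 3422 after its own rounding)
    # Busy pollers (-l 0): 2702766145 cyc / 3851000 polls
    echo $(( 2702766145 / 3851000 ))  # -> 701 cyc, ~259 ns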
00:07:17.275 [2024-07-21 03:17:02.568624] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288793 ] 00:07:17.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.532 [2024-07-21 03:17:02.631802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.532 [2024-07-21 03:17:02.722383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.532 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:18.899 ====================================== 00:07:18.899 busy:2702766145 (cyc) 00:07:18.899 total_run_count: 3851000 00:07:18.899 tsc_hz: 2700000000 (cyc) 00:07:18.899 ====================================== 00:07:18.899 poller_cost: 701 (cyc), 259 (nsec) 00:07:18.899 00:07:18.899 real 0m1.251s 00:07:18.899 user 0m1.168s 00:07:18.899 sys 0m0.077s 00:07:18.899 03:17:03 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.899 03:17:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 ************************************ 00:07:18.899 END TEST thread_poller_perf 00:07:18.899 ************************************ 00:07:18.899 03:17:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:18.899 00:07:18.899 real 0m2.655s 00:07:18.899 user 0m2.390s 00:07:18.899 sys 0m0.265s 00:07:18.899 03:17:03 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.899 03:17:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 ************************************ 00:07:18.899 END TEST thread 00:07:18.899 ************************************ 00:07:18.899 03:17:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:18.899 03:17:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.899 03:17:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.899 03:17:03 -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 ************************************ 00:07:18.899 START TEST accel 00:07:18.899 ************************************ 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:18.899 * Looking for test storage... 
00:07:18.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:18.899 03:17:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:18.899 03:17:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:18.899 03:17:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.899 03:17:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2288984 00:07:18.899 03:17:03 accel -- accel/accel.sh@63 -- # waitforlisten 2288984 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@827 -- # '[' -z 2288984 ']' 00:07:18.899 03:17:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.899 03:17:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.899 03:17:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.899 03:17:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.899 03:17:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.899 03:17:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 03:17:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.899 03:17:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.899 03:17:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:18.899 03:17:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:18.899 [2024-07-21 03:17:03.982682] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:18.899 [2024-07-21 03:17:03.982756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288984 ] 00:07:18.899 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.899 [2024-07-21 03:17:04.043994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.899 [2024-07-21 03:17:04.137271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@860 -- # return 0 00:07:19.156 03:17:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:19.156 03:17:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:19.156 03:17:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:19.156 03:17:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:19.156 03:17:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:19.156 03:17:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.156 03:17:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 03:17:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.156 03:17:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.156 03:17:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.156 
03:17:04 accel -- accel/accel.sh@75 -- # killprocess 2288984 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@946 -- # '[' -z 2288984 ']' 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@950 -- # kill -0 2288984 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@951 -- # uname 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2288984 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2288984' 00:07:19.156 killing process with pid 2288984 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@965 -- # kill 2288984 00:07:19.156 03:17:04 accel -- common/autotest_common.sh@970 -- # wait 2288984 00:07:19.719 03:17:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:19.719 03:17:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 03:17:04 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:19.719 03:17:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:07:19.719 03:17:04 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.719 03:17:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 03:17:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.719 03:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 ************************************ 00:07:19.719 START TEST accel_missing_filename 00:07:19.719 ************************************ 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.719 03:17:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:19.719 03:17:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:19.719 [2024-07-21 03:17:04.979468] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:19.719 [2024-07-21 03:17:04.979534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289154 ] 00:07:19.719 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.975 [2024-07-21 03:17:05.043461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.975 [2024-07-21 03:17:05.136620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.975 [2024-07-21 03:17:05.198635] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.975 [2024-07-21 03:17:05.276297] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:20.232 A filename is required. 
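Each of these negative tests wraps accel_perf in NOT, a helper from autotest_common.sh that inverts the exit status: the test passes only because accel_perf refuses to run. A minimal sketch of such an inverting wrapper, assuming semantics consistent with the trace (the real helper also remaps exit codes above 128 — signal deaths — by subtracting 128, which is why es=234 collapses to es=106 and then to es=1 in the trace just below):

    NOT() {
        local es=0
        "$@" || es=$?
        # the wrapped command is expected to fail: succeed only on a non-zero exit
        (( es != 0 ))
    }

Here accel_perf -t 1 -w compress exits non-zero because the compress workload needs an input file ('A filename is required.'), so NOT — and with it the accel_missing_filename test — succeeds.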
00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.232 00:07:20.232 real 0m0.398s 00:07:20.232 user 0m0.278s 00:07:20.232 sys 0m0.154s 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.232 03:17:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:20.232 ************************************ 00:07:20.232 END TEST accel_missing_filename 00:07:20.232 ************************************ 00:07:20.232 03:17:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.232 03:17:05 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:20.232 03:17:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.232 03:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.232 ************************************ 00:07:20.232 START TEST accel_compress_verify 00:07:20.232 ************************************ 00:07:20.232 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.232 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.233 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.233 
03:17:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:20.233 03:17:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:20.233 [2024-07-21 03:17:05.426550] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.233 [2024-07-21 03:17:05.426624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289291 ] 00:07:20.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.233 [2024-07-21 03:17:05.489996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.489 [2024-07-21 03:17:05.578665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.489 [2024-07-21 03:17:05.636881] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.489 [2024-07-21 03:17:05.723930] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:20.746 00:07:20.746 Compression does not support the verify option, aborting. 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.746 00:07:20.746 real 0m0.402s 00:07:20.746 user 0m0.289s 00:07:20.746 sys 0m0.146s 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.746 03:17:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 END TEST accel_compress_verify 00:07:20.746 ************************************ 00:07:20.746 03:17:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 START TEST accel_wrong_workload 00:07:20.746 ************************************ 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:20.746 03:17:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:20.746 Unsupported workload type: foobar 00:07:20.746 [2024-07-21 03:17:05.875508] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:20.746 accel_perf options: 00:07:20.746 [-h help message] 00:07:20.746 [-q queue depth per core] 00:07:20.746 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.746 [-T number of threads per core 00:07:20.746 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.746 [-t time in seconds] 00:07:20.746 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.746 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:20.746 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.746 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.746 [-S for crc32c workload, use this seed value (default 0) 00:07:20.746 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.746 [-f for fill workload, use this BYTE value (default 255) 00:07:20.746 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.746 [-y verify result if this switch is on] 00:07:20.746 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.746 Can be used to spread operations across a wider range of memory. 
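The usage text above is printed because foobar is not a recognized -w workload. For contrast, a valid invocation of the same binary — as exercised by the accel_crc32c test later in this log — looks like:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y

i.e. run the crc32c workload for 1 second with seed 32 and verify results (-y), reading the generated accel config from file descriptor 62.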
00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.746 00:07:20.746 real 0m0.024s 00:07:20.746 user 0m0.015s 00:07:20.746 sys 0m0.009s 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.746 03:17:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 END TEST accel_wrong_workload 00:07:20.746 ************************************ 00:07:20.746 Error: writing output failed: Broken pipe 00:07:20.746 03:17:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 START TEST accel_negative_buffers 00:07:20.746 ************************************ 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:20.746 03:17:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:20.746 -x option must be non-negative. 
00:07:20.746 [2024-07-21 03:17:05.937021] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:20.746 accel_perf options: 00:07:20.746 [-h help message] 00:07:20.746 [-q queue depth per core] 00:07:20.746 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.746 [-T number of threads per core 00:07:20.746 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.746 [-t time in seconds] 00:07:20.746 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.746 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:20.746 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.746 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.746 [-S for crc32c workload, use this seed value (default 0) 00:07:20.746 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.746 [-f for fill workload, use this BYTE value (default 255) 00:07:20.746 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.746 [-y verify result if this switch is on] 00:07:20.746 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.746 Can be used to spread operations across a wider range of memory. 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.746 00:07:20.746 real 0m0.021s 00:07:20.746 user 0m0.014s 00:07:20.746 sys 0m0.007s 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.746 03:17:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 END TEST accel_negative_buffers 00:07:20.746 ************************************ 00:07:20.746 Error: writing output failed: Broken pipe 00:07:20.746 03:17:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.746 03:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.746 ************************************ 00:07:20.746 START TEST accel_crc32c 00:07:20.746 ************************************ 00:07:20.746 03:17:05 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:20.746 03:17:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:20.746 [2024-07-21 03:17:06.005902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.746 [2024-07-21 03:17:06.005961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289365 ] 00:07:20.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.003 [2024-07-21 03:17:06.068962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.003 [2024-07-21 03:17:06.161949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.003 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.004 03:17:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.443 03:17:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.443 00:07:22.443 real 0m1.408s 00:07:22.443 user 0m1.260s 00:07:22.443 sys 0m0.150s 00:07:22.443 03:17:07 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.443 03:17:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.443 ************************************ 00:07:22.443 END TEST accel_crc32c 00:07:22.443 ************************************ 00:07:22.443 03:17:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:22.443 03:17:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:22.443 03:17:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.443 03:17:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.443 ************************************ 00:07:22.443 START TEST accel_crc32c_C2 00:07:22.443 ************************************ 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:22.443 03:17:07 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.443 [2024-07-21 03:17:07.456923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:22.443 [2024-07-21 03:17:07.456986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289518 ] 00:07:22.443 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.443 [2024-07-21 03:17:07.521102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.443 [2024-07-21 03:17:07.614770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.443 03:17:07 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.443 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 03:17:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.813 00:07:23.813 real 0m1.402s 00:07:23.813 user 0m1.256s 00:07:23.813 sys 0m0.147s 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.813 03:17:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:23.813 ************************************ 00:07:23.813 END TEST accel_crc32c_C2 00:07:23.813 ************************************ 00:07:23.813 03:17:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:23.813 03:17:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:23.813 03:17:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.813 03:17:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.814 ************************************ 00:07:23.814 START TEST accel_copy 00:07:23.814 ************************************ 00:07:23.814 03:17:08 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 
03:17:08 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:23.814 03:17:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:23.814 [2024-07-21 03:17:08.903998] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:23.814 [2024-07-21 03:17:08.904060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289796 ] 00:07:23.814 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.814 [2024-07-21 03:17:08.967088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.814 [2024-07-21 03:17:09.058801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.814 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.071 03:17:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.001 03:17:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:25.002 03:17:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.002 00:07:25.002 real 0m1.398s 00:07:25.002 user 0m1.249s 00:07:25.002 sys 0m0.151s 00:07:25.002 03:17:10 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.002 03:17:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.002 ************************************ 00:07:25.002 END TEST accel_copy 00:07:25.002 ************************************ 00:07:25.002 03:17:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.002 03:17:10 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.002 03:17:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.002 03:17:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.259 ************************************ 00:07:25.259 START TEST accel_fill 00:07:25.259 ************************************ 00:07:25.259 03:17:10 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.259 03:17:10 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:25.259 [2024-07-21 03:17:10.344830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:25.259 [2024-07-21 03:17:10.344887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289955 ] 00:07:25.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.259 [2024-07-21 03:17:10.405965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.259 [2024-07-21 03:17:10.498891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.259 03:17:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:26.682 03:17:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.682 00:07:26.682 real 0m1.408s 00:07:26.682 user 0m1.267s 00:07:26.682 sys 0m0.143s 00:07:26.682 03:17:11 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.682 03:17:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:26.682 ************************************ 00:07:26.682 END TEST accel_fill 00:07:26.682 ************************************ 00:07:26.682 03:17:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.682 03:17:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:26.682 03:17:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.682 03:17:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.682 ************************************ 00:07:26.682 START TEST accel_copy_crc32c 00:07:26.682 ************************************ 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:26.682 03:17:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:07:26.682 [2024-07-21 03:17:11.794160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:26.682 [2024-07-21 03:17:11.794219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290112 ] 00:07:26.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.682 [2024-07-21 03:17:11.856006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.682 [2024-07-21 03:17:11.949045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.940 03:17:12 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.940 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.941 03:17:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.882 00:07:27.882 real 0m1.409s 00:07:27.882 user 0m1.271s 00:07:27.882 sys 0m0.141s 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.882 03:17:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.882 ************************************ 00:07:27.882 END TEST accel_copy_crc32c 00:07:27.882 ************************************ 00:07:28.141 03:17:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.141 03:17:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:28.141 03:17:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.141 03:17:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.141 ************************************ 00:07:28.141 START TEST accel_copy_crc32c_C2 00:07:28.141 ************************************ 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.141 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:28.141 [2024-07-21 03:17:13.250507] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:28.141 [2024-07-21 03:17:13.250569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290313 ] 00:07:28.141 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.141 [2024-07-21 03:17:13.312261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.141 [2024-07-21 03:17:13.405574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.399 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.400 03:17:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.332 00:07:29.332 real 0m1.410s 00:07:29.332 user 0m1.271s 00:07:29.332 sys 0m0.143s 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.332 03:17:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:29.332 
************************************ 00:07:29.332 END TEST accel_copy_crc32c_C2 00:07:29.332 ************************************ 00:07:29.590 03:17:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.590 03:17:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.590 03:17:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.590 03:17:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.590 ************************************ 00:07:29.590 START TEST accel_dualcast 00:07:29.590 ************************************ 00:07:29.590 03:17:14 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:29.590 03:17:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:29.590 03:17:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:29.590 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.590 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.590 03:17:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:29.591 03:17:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:29.591 [2024-07-21 03:17:14.702841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:29.591 [2024-07-21 03:17:14.702898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290543 ] 00:07:29.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.591 [2024-07-21 03:17:14.764194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.591 [2024-07-21 03:17:14.856301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 
03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.849 03:17:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:30.785 03:17:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.785 00:07:30.785 real 0m1.399s 00:07:30.785 user 0m1.255s 00:07:30.785 sys 0m0.146s 00:07:30.785 03:17:16 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.785 03:17:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:30.785 ************************************ 00:07:30.785 END TEST accel_dualcast 00:07:30.785 ************************************ 00:07:31.043 03:17:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:31.043 03:17:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:31.043 03:17:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.043 03:17:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.043 ************************************ 00:07:31.043 START TEST accel_compare 00:07:31.043 ************************************ 00:07:31.043 03:17:16 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 03:17:16 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:31.044 03:17:16 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:31.044 [2024-07-21 03:17:16.155916] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:31.044 [2024-07-21 03:17:16.155994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290700 ] 00:07:31.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.044 [2024-07-21 03:17:16.219555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.044 [2024-07-21 03:17:16.310692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.302 03:17:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:32.493 03:17:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.493 00:07:32.493 real 0m1.414s 00:07:32.493 user 0m1.274s 00:07:32.493 sys 0m0.143s 00:07:32.493 03:17:17 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.493 03:17:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:32.493 ************************************ 00:07:32.493 END TEST accel_compare 00:07:32.493 ************************************ 00:07:32.493 03:17:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:32.493 03:17:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:32.493 03:17:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.493 03:17:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.493 ************************************ 00:07:32.493 START TEST accel_xor 00:07:32.493 ************************************ 00:07:32.493 03:17:17 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:32.493 03:17:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:32.493 [2024-07-21 03:17:17.613983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:32.493 [2024-07-21 03:17:17.614048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290851 ] 00:07:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.493 [2024-07-21 03:17:17.675478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.493 [2024-07-21 03:17:17.770489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.752 03:17:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 
03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.684 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.942 03:17:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.942 03:17:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.942 03:17:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:33.942 03:17:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.942 00:07:33.942 real 0m1.400s 00:07:33.942 user 0m1.259s 00:07:33.942 sys 0m0.143s 00:07:33.942 03:17:18 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.942 03:17:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:33.942 ************************************ 00:07:33.942 END TEST accel_xor 00:07:33.942 ************************************ 00:07:33.942 03:17:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:33.942 03:17:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:33.942 03:17:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.942 03:17:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.942 ************************************ 00:07:33.942 START TEST accel_xor 00:07:33.942 ************************************ 00:07:33.942 03:17:19 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:33.942 03:17:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:33.942 [2024-07-21 03:17:19.056108] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
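The xtrace above records the exact command this case runs; a minimal sketch for replaying it by hand follows (the CI harness also feeds EAL options and a JSON accel config over /dev/fd/62; omitting those is an assumption that the default software module is selected, which matches the [[ software == software ]] check traced above):

    # Replay of the 3-source xor workload; -x appears to set the number of
    # xor source buffers (val=2 in the previous accel_xor case, val=3 here).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3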
00:07:33.942 [2024-07-21 03:17:19.056174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291125 ] 00:07:33.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.942 [2024-07-21 03:17:19.116775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.942 [2024-07-21 03:17:19.208437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.200 03:17:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 
03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:35.131 03:17:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.131 00:07:35.131 real 0m1.387s 00:07:35.131 user 0m1.254s 00:07:35.131 sys 0m0.135s 00:07:35.131 03:17:20 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.131 03:17:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 ************************************ 00:07:35.131 END TEST accel_xor 00:07:35.131 ************************************ 00:07:35.388 03:17:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:35.388 03:17:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:35.388 03:17:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.388 03:17:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.388 ************************************ 00:07:35.388 START TEST accel_dif_verify 00:07:35.388 ************************************ 00:07:35.388 03:17:20 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:35.388 03:17:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:35.388 [2024-07-21 03:17:20.487924] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
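For orientation, the dif_verify case can be reconstructed directly from the stack prefixes in the trace: run_test executes from common/autotest_common.sh, accel_test from accel/accel.sh, and accel_test in turn launches accel_perf. A sketch of the harness-level entry point, copied from the run_test line traced above:

    # dif_verify case exactly as accel.sh@111 invokes it; the 4096-, 512-,
    # and 8-byte buffer values seen in the trace are supplied by accel_test.
    run_test accel_dif_verify accel_test -t 1 -w dif_verify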
00:07:35.388 [2024-07-21 03:17:20.487994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291283 ] 00:07:35.388 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.388 [2024-07-21 03:17:20.550731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.388 [2024-07-21 03:17:20.643690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 
03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.646 03:17:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.577 
03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.577 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:36.578 03:17:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.578 00:07:36.578 real 0m1.408s 00:07:36.578 user 0m1.279s 00:07:36.578 sys 0m0.134s 00:07:36.578 03:17:21 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.578 03:17:21 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.578 ************************************ 00:07:36.578 END TEST accel_dif_verify 00:07:36.578 ************************************ 00:07:36.836 03:17:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:36.836 03:17:21 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:36.836 03:17:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.836 03:17:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.836 ************************************ 00:07:36.836 START TEST accel_dif_generate 00:07:36.836 ************************************ 00:07:36.836 03:17:21 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:36.836 03:17:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:36.836 [2024-07-21 03:17:21.945487] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:36.836 [2024-07-21 03:17:21.945550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291443 ] 00:07:36.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.836 [2024-07-21 03:17:22.007266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.836 [2024-07-21 03:17:22.101407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.094 03:17:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.026 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:38.283 03:17:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.283 00:07:38.283 real 0m1.413s 00:07:38.283 user 0m1.275s 00:07:38.283 sys 
0m0.143s 00:07:38.283 03:17:23 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.283 03:17:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:38.283 ************************************ 00:07:38.283 END TEST accel_dif_generate 00:07:38.283 ************************************ 00:07:38.283 03:17:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.283 03:17:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:38.283 03:17:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.283 03:17:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.283 ************************************ 00:07:38.283 START TEST accel_dif_generate_copy 00:07:38.283 ************************************ 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.283 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.284 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:38.284 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:38.284 [2024-07-21 03:17:23.397554] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
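The same replay pattern applies to the dif_generate_copy workload just started above; a standalone sketch, again omitting the /dev/fd/62 config on the assumption that the software module is in use:

    # One-second dif_generate_copy run, matching the traced accel_perf flags.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w dif_generate_copy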
00:07:38.284 [2024-07-21 03:17:23.397627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291613 ] 00:07:38.284 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.284 [2024-07-21 03:17:23.458377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.284 [2024-07-21 03:17:23.550801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.541 03:17:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.474 00:07:39.474 real 0m1.403s 00:07:39.474 user 0m1.265s 00:07:39.474 sys 0m0.140s 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.474 03:17:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.474 ************************************ 00:07:39.474 END TEST accel_dif_generate_copy 00:07:39.474 ************************************ 00:07:39.732 03:17:24 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:39.732 03:17:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.732 03:17:24 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:39.732 03:17:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.732 03:17:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.732 ************************************ 00:07:39.732 START TEST accel_comp 00:07:39.732 ************************************ 00:07:39.732 03:17:24 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:39.732 03:17:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:39.732 [2024-07-21 03:17:24.851791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:39.732 [2024-07-21 03:17:24.851850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291889 ] 00:07:39.732 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.732 [2024-07-21 03:17:24.915467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.732 [2024-07-21 03:17:25.005251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 
03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.990 03:17:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:40.960 03:17:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.960 00:07:40.960 real 0m1.406s 00:07:40.960 user 0m1.270s 00:07:40.960 sys 0m0.140s 00:07:40.960 03:17:26 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.960 03:17:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:40.960 ************************************ 00:07:40.960 END TEST accel_comp 00:07:40.960 ************************************ 00:07:41.218 03:17:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.218 03:17:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:41.218 03:17:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.218 03:17:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.218 ************************************ 00:07:41.218 START TEST accel_decomp 00:07:41.218 ************************************ 00:07:41.218 03:17:26 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:41.218 [2024-07-21 03:17:26.299547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:41.218 [2024-07-21 03:17:26.299609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292050 ] 00:07:41.218 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.218 [2024-07-21 03:17:26.362803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.218 [2024-07-21 03:17:26.456780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.218 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.219 03:17:26 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.219 03:17:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.589 03:17:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.589 00:07:42.589 real 0m1.411s 00:07:42.589 user 0m1.269s 00:07:42.589 sys 0m0.146s 00:07:42.589 03:17:27 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.589 03:17:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 ************************************ 00:07:42.589 END TEST accel_decomp 00:07:42.589 ************************************ 00:07:42.589 
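Note: the long runs of "IFS=: / read -r var val / case \"$var\" in" records above and below are bash xtrace of one small loop in accel.sh (lines @19-@23): each expected setting for the run arrives as a var:val pair, and the case statement latches the opcode and module that the @27 checks compare against once accel_perf exits. A minimal standalone sketch of that shape — the var:val input stream here is illustrative only, not the harness's real one:

  # Minimal sketch of the accel.sh@19-@23 loop seen in the trace above;
  # the input pairs below are illustrative only.
  printf '%s\n' 'opc:decompress' 'module:software' |
  while IFS=: read -r var val; do
      case "$var" in
          opc)    echo "accel_opc=$val" ;;     # cf. "@23 accel_opc=decompress"
          module) echo "accel_module=$val" ;;  # cf. "@22 accel_module=software"
      esac
  done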
03:17:27 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.589 03:17:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:42.589 03:17:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.589 03:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 ************************************ 00:07:42.589 START TEST accel_decmop_full 00:07:42.589 ************************************ 00:07:42.589 03:17:27 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:42.589 03:17:27 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:42.589 [2024-07-21 03:17:27.754183] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:42.590 [2024-07-21 03:17:27.754242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292202 ] 00:07:42.590 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.590 [2024-07-21 03:17:27.815543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.847 [2024-07-21 03:17:27.909498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.847 03:17:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.219 03:17:29 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.219 00:07:44.219 real 0m1.426s 00:07:44.219 user 0m1.293s 00:07:44.219 sys 0m0.137s 00:07:44.219 03:17:29 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.219 03:17:29 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:44.219 ************************************ 00:07:44.219 END TEST accel_decmop_full 00:07:44.219 ************************************ 00:07:44.219 03:17:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.219 03:17:29 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:44.219 03:17:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.219 03:17:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.219 ************************************ 00:07:44.219 START TEST accel_decomp_mcore 00:07:44.219 ************************************ 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.219 [2024-07-21 03:17:29.226453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:44.219 [2024-07-21 03:17:29.226524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292457 ] 00:07:44.219 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.219 [2024-07-21 03:17:29.290485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.219 [2024-07-21 03:17:29.387058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.219 [2024-07-21 03:17:29.387113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.219 [2024-07-21 03:17:29.387227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.219 [2024-07-21 03:17:29.387229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.219 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.220 03:17:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.593 00:07:45.593 real 0m1.421s 00:07:45.593 user 0m4.721s 00:07:45.593 sys 0m0.161s 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.593 03:17:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:45.593 ************************************ 00:07:45.593 END TEST accel_decomp_mcore 00:07:45.593 ************************************ 00:07:45.593 03:17:30 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.593 03:17:30 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:45.593 03:17:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.593 03:17:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.593 ************************************ 00:07:45.593 START TEST accel_decomp_full_mcore 00:07:45.593 ************************************ 00:07:45.593 03:17:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:45.594 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:45.594 [2024-07-21 03:17:30.687557] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:45.594 [2024-07-21 03:17:30.687633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292634 ] 00:07:45.594 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.594 [2024-07-21 03:17:30.750931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.594 [2024-07-21 03:17:30.846301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.594 [2024-07-21 03:17:30.846356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.594 [2024-07-21 03:17:30.846475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.594 [2024-07-21 03:17:30.846478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.852 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.853 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.853 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.853 03:17:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.786 00:07:46.786 real 0m1.423s 00:07:46.786 user 0m4.742s 00:07:46.786 sys 0m0.151s 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.786 03:17:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 ************************************ 00:07:46.786 END TEST accel_decomp_full_mcore 00:07:46.786 ************************************ 00:07:47.044 03:17:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.044 03:17:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:47.044 03:17:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.044 03:17:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.044 ************************************ 00:07:47.044 START TEST accel_decomp_mthread 00:07:47.044 ************************************ 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:47.044 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
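Note: the "-c /dev/fd/62" argument in the accel/accel.sh@12 records comes from process substitution: the harness assembles the accel JSON config in the accel_json_cfg array (@31) and pipes it through jq (@41), so accel_perf reads its config from a file descriptor rather than a temp file. A rough reconstruction of that helper, with the JSON envelope assumed — only the array, the "local IFS=," join, and the jq call actually appear in the trace:

  # Assumed reconstruction of build_accel_config (accel.sh@31-@41); with no
  # module overrides the array stays empty and jq just validates the envelope.
  build_accel_config() {
      local accel_json_cfg=()
      local IFS=,
      jq -r . <<< "{\"subsystems\": [${accel_json_cfg[*]}]}"
  }
  # The harness then runs: accel_perf -c <(build_accel_config) ...
  # which is why the trace shows "-c /dev/fd/62".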
00:07:47.044 [2024-07-21 03:17:32.154950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:47.044 [2024-07-21 03:17:32.155015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292802 ] 00:07:47.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.044 [2024-07-21 03:17:32.217356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.044 [2024-07-21 03:17:32.310795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.302 03:17:32 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.672 00:07:48.672 real 0m1.420s 00:07:48.672 user 0m1.272s 00:07:48.672 sys 0m0.151s 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.672 03:17:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:48.672 ************************************ 00:07:48.672 END TEST accel_decomp_mthread 00:07:48.672 ************************************ 00:07:48.672 03:17:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.672 03:17:33 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:48.672 03:17:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.672 03:17:33 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.672 ************************************ 00:07:48.672 START TEST accel_decomp_full_mthread 00:07:48.672 ************************************ 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.672 [2024-07-21 03:17:33.620767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
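Distilled from the xtrace above: the multithreaded decompress cases reduce to a single accel_perf invocation. The flags below are copied from the logged command line (workspace prefix shortened); the flag glosses are inferred from the logged val= assignments, not from accel_perf's help text, so treat them as a sketch rather than a reference.

  # Flags as logged: -t 1 ('1 seconds' runtime), -w decompress (workload),
  # -l <file> (compressed input), -T 2 (two worker threads, per val=2),
  # -y and -o 0 as passed through by accel.sh; with -o 0 the transfer size
  # appears to become the whole file ('111250 bytes' in this full_mthread
  # case, versus the fixed '4096 bytes' buffers in the mthread case above).
  build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y -o 0 -T 2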
00:07:48.672 [2024-07-21 03:17:33.620830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292953 ] 00:07:48.672 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.672 [2024-07-21 03:17:33.683150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.672 [2024-07-21 03:17:33.775344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.672 03:17:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.043 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.043 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.043 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.043 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.043 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.044 00:07:50.044 real 0m1.443s 00:07:50.044 user 0m1.297s 00:07:50.044 sys 0m0.149s 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.044 03:17:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 ************************************ 00:07:50.044 END TEST accel_decomp_full_mthread 00:07:50.044 
************************************ 00:07:50.044 03:17:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:50.044 03:17:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.044 03:17:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:50.044 03:17:35 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.044 03:17:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.044 03:17:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.044 03:17:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.044 03:17:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 03:17:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.044 03:17:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.044 03:17:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.044 03:17:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:50.044 03:17:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:50.044 ************************************ 00:07:50.044 START TEST accel_dif_functional_tests 00:07:50.044 ************************************ 00:07:50.044 03:17:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.044 [2024-07-21 03:17:35.129400] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:50.044 [2024-07-21 03:17:35.129471] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293228 ] 00:07:50.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.044 [2024-07-21 03:17:35.196867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.044 [2024-07-21 03:17:35.288402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.044 [2024-07-21 03:17:35.288470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.044 [2024-07-21 03:17:35.288472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.302 00:07:50.302 00:07:50.302 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.302 http://cunit.sourceforge.net/ 00:07:50.302 00:07:50.302 00:07:50.302 Suite: accel_dif 00:07:50.302 Test: verify: DIF generated, GUARD check ...passed 00:07:50.302 Test: verify: DIF generated, APPTAG check ...passed 00:07:50.302 Test: verify: DIF generated, REFTAG check ...passed 00:07:50.302 Test: verify: DIF not generated, GUARD check ...[2024-07-21 03:17:35.377506] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.302 passed 00:07:50.302 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 03:17:35.377574] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.302 passed 00:07:50.302 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 03:17:35.377629] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.302 passed 00:07:50.302 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:50.302 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 03:17:35.377704] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:50.302 passed 00:07:50.302 
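The negative verify checks logged here, and the copy variants that follow, each corrupt one field of the per-block protection-information tuple and confirm the mismatch is caught. The field widths below come from the standard T10 DIF definition, not from this log; the expected/actual values are the ones printed above.

  # Each 'Failed to compare ...' error maps to one field of the 8-byte tuple:
  #   Guard   - 16-bit CRC over the block   (Expected=5a5a, Actual=7867)
  #   App Tag - 16-bit application-defined  (Expected=14,   Actual=5a5a)
  #   Ref Tag - 32-bit, seeded from the LBA (Expected=a,    Actual=5a5a5a5a)
  # The suite is a standalone CUnit binary fed its accel config on fd 62:
  test/accel/dif/dif -c /dev/fd/62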
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:50.302 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:50.302 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:50.302 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 03:17:35.377838] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:50.302 passed 00:07:50.302 Test: verify copy: DIF generated, GUARD check ...passed 00:07:50.302 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:50.302 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:50.302 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 03:17:35.377995] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.302 passed 00:07:50.302 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 03:17:35.378031] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.302 passed 00:07:50.302 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 03:17:35.378062] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.302 passed 00:07:50.302 Test: generate copy: DIF generated, GUARD check ...passed 00:07:50.302 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:50.302 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:50.302 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:50.302 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:50.302 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:50.302 Test: generate copy: iovecs-len validate ...[2024-07-21 03:17:35.378281] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:50.302 passed 00:07:50.302 Test: generate copy: buffer alignment validate ...passed 00:07:50.302 00:07:50.302 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.302 suites 1 1 n/a 0 0 00:07:50.302 tests 26 26 26 0 0 00:07:50.302 asserts 115 115 115 0 n/a 00:07:50.302 00:07:50.302 Elapsed time = 0.002 seconds 00:07:50.302 00:07:50.302 real 0m0.498s 00:07:50.302 user 0m0.763s 00:07:50.302 sys 0m0.174s 00:07:50.302 03:17:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.302 03:17:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:50.302 ************************************ 00:07:50.302 END TEST accel_dif_functional_tests 00:07:50.302 ************************************ 00:07:50.302 00:07:50.302 real 0m31.730s 00:07:50.302 user 0m35.148s 00:07:50.302 sys 0m4.546s 00:07:50.302 03:17:35 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.302 03:17:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.302 ************************************ 00:07:50.302 END TEST accel 00:07:50.302 ************************************ 00:07:50.559 03:17:35 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:50.559 03:17:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.559 03:17:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.559 03:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:50.559 ************************************ 00:07:50.559 START TEST accel_rpc 00:07:50.559 ************************************ 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:50.559 * Looking for test storage... 00:07:50.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:50.559 03:17:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.559 03:17:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2293300 00:07:50.559 03:17:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:50.559 03:17:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2293300 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2293300 ']' 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.559 03:17:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.559 [2024-07-21 03:17:35.764027] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
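For the accel_rpc run starting here: the target was launched with --wait-for-rpc, so the test can reassign an opcode before subsystem initialization completes. The RPC sequence it drives, visible in the xtrace below, amounts to the following (rpc.py path shortened; all method names taken from this log):

  # Assign the 'copy' opcode to a module, finish init, then read it back:
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software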
00:07:50.559 [2024-07-21 03:17:35.764103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293300 ] 00:07:50.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.559 [2024-07-21 03:17:35.823979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.818 [2024-07-21 03:17:35.908063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.818 03:17:35 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.818 03:17:35 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:50.818 03:17:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:50.818 03:17:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:50.818 03:17:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:50.818 03:17:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:50.818 03:17:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:50.818 03:17:35 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.818 03:17:35 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.818 03:17:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.818 ************************************ 00:07:50.818 START TEST accel_assign_opcode 00:07:50.818 ************************************ 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.818 [2024-07-21 03:17:35.992700] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.818 03:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.818 [2024-07-21 03:17:36.000720] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:50.818 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.818 03:17:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:50.818 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.818 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.075 03:17:36 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.075 software 00:07:51.075 00:07:51.075 real 0m0.293s 00:07:51.075 user 0m0.042s 00:07:51.075 sys 0m0.006s 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.075 03:17:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.075 ************************************ 00:07:51.075 END TEST accel_assign_opcode 00:07:51.075 ************************************ 00:07:51.075 03:17:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2293300 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2293300 ']' 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2293300 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2293300 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2293300' 00:07:51.075 killing process with pid 2293300 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@965 -- # kill 2293300 00:07:51.075 03:17:36 accel_rpc -- common/autotest_common.sh@970 -- # wait 2293300 00:07:51.639 00:07:51.639 real 0m1.051s 00:07:51.639 user 0m1.005s 00:07:51.639 sys 0m0.395s 00:07:51.639 03:17:36 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.639 03:17:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.639 ************************************ 00:07:51.639 END TEST accel_rpc 00:07:51.639 ************************************ 00:07:51.639 03:17:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.639 03:17:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.639 03:17:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.639 03:17:36 -- common/autotest_common.sh@10 -- # set +x 00:07:51.639 ************************************ 00:07:51.639 START TEST app_cmdline 00:07:51.639 ************************************ 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.639 * Looking for test storage... 
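The app_cmdline run that follows starts the target with an RPC allow-list, so only two methods are reachable. Condensed from the commands and JSON-RPC traffic logged below (paths shortened):

  # Only the two allow-listed methods may be called:
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # permitted: returns the version JSON
  scripts/rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC -32601 'Method not found'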
00:07:51.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:51.639 03:17:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.639 03:17:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2293504 00:07:51.639 03:17:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.639 03:17:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2293504 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2293504 ']' 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.639 03:17:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.639 [2024-07-21 03:17:36.867437] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:51.639 [2024-07-21 03:17:36.867545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293504 ] 00:07:51.639 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.639 [2024-07-21 03:17:36.925547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.895 [2024-07-21 03:17:37.011414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.152 03:17:37 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.152 03:17:37 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:52.152 03:17:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:52.410 { 00:07:52.410 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:52.410 "fields": { 00:07:52.410 "major": 24, 00:07:52.410 "minor": 5, 00:07:52.410 "patch": 1, 00:07:52.410 "suffix": "-pre", 00:07:52.410 "commit": "5fa2f5086" 00:07:52.410 } 00:07:52.410 } 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:52.410 03:17:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:52.410 03:17:37 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.668 request: 00:07:52.668 { 00:07:52.668 "method": "env_dpdk_get_mem_stats", 00:07:52.668 "req_id": 1 00:07:52.668 } 00:07:52.668 Got JSON-RPC error response 00:07:52.668 response: 00:07:52.668 { 00:07:52.668 "code": -32601, 00:07:52.668 "message": "Method not found" 00:07:52.668 } 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.668 03:17:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2293504 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2293504 ']' 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2293504 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2293504 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2293504' 00:07:52.668 killing process with pid 2293504 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@965 -- # kill 2293504 00:07:52.668 03:17:37 app_cmdline -- common/autotest_common.sh@970 -- # wait 2293504 00:07:53.233 00:07:53.233 real 0m1.498s 00:07:53.233 user 0m1.819s 00:07:53.233 sys 0m0.478s 00:07:53.233 03:17:38 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.233 03:17:38 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 ************************************ 00:07:53.233 END TEST app_cmdline 00:07:53.233 ************************************ 00:07:53.233 03:17:38 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:53.233 03:17:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:53.233 03:17:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.233 03:17:38 -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 ************************************ 00:07:53.233 START TEST version 00:07:53.233 ************************************ 00:07:53.233 03:17:38 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:53.233 * Looking for test storage... 00:07:53.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:53.233 03:17:38 version -- app/version.sh@17 -- # get_header_version major 00:07:53.233 03:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # cut -f2 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.233 03:17:38 version -- app/version.sh@17 -- # major=24 00:07:53.233 03:17:38 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.233 03:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # cut -f2 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.233 03:17:38 version -- app/version.sh@18 -- # minor=5 00:07:53.233 03:17:38 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.233 03:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # cut -f2 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.233 03:17:38 version -- app/version.sh@19 -- # patch=1 00:07:53.233 03:17:38 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.233 03:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # cut -f2 00:07:53.233 03:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.233 03:17:38 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.233 03:17:38 version -- app/version.sh@22 -- # version=24.5 00:07:53.233 03:17:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.233 03:17:38 version -- app/version.sh@25 -- # version=24.5.1 00:07:53.233 03:17:38 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:53.233 03:17:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:53.233 03:17:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
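The version test above cross-checks the C header against the Python package; per field, it runs the grep/cut/tr pipeline visible in the xtrace (header path shortened):

  # Extract one version component from the header, e.g. the major number:
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
      | cut -f2 | tr -d '"'              # -> 24
  # Then compare the assembled '24.5.1rc0' against the Python view:
  python3 -c 'import spdk; print(spdk.__version__)'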
00:07:53.233 03:17:38 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:53.233 03:17:38 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:53.233 00:07:53.233 real 0m0.106s 00:07:53.233 user 0m0.066s 00:07:53.233 sys 0m0.061s 00:07:53.233 03:17:38 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.233 03:17:38 version -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 ************************************ 00:07:53.233 END TEST version 00:07:53.233 ************************************ 00:07:53.233 03:17:38 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@198 -- # uname -s 00:07:53.233 03:17:38 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:53.233 03:17:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:53.233 03:17:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:53.233 03:17:38 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:53.233 03:17:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.233 03:17:38 -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 03:17:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:53.233 03:17:38 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:53.233 03:17:38 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.233 03:17:38 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.233 03:17:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.233 03:17:38 -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 ************************************ 00:07:53.233 START TEST nvmf_tcp 00:07:53.233 ************************************ 00:07:53.233 03:17:38 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.233 * Looking for test storage... 00:07:53.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.233 03:17:38 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.492 03:17:38 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.492 03:17:38 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.492 03:17:38 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.492 03:17:38 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:53.492 03:17:38 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:53.492 03:17:38 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.492 03:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:53.492 03:17:38 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:53.492 03:17:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.492 03:17:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.492 03:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 ************************************ 00:07:53.492 START TEST nvmf_example 00:07:53.492 ************************************ 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:53.492 * Looking for test storage... 
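In the nvmf_example output that follows, nvmftestinit sources the shared nvmf/common.sh defaults and then scans for supported NICs. On this phy-mode rig that reduces to roughly the following; the values and device IDs are taken from the scan logged below, and the E810/'ice' naming follows the driver reported there:

  # Defaults exported by nvmf/common.sh for the target under test:
  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100 NET_TYPE=phy
  # The scan matches supported adapters by vendor:device ID; the logged
  # Intel E810 ('ice') ports can be listed the same way by hand:
  lspci -d 8086:159b    # -> 0000:0a:00.0 and 0000:0a:00.1 on this host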
00:07:53.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.492 03:17:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:55.393 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:55.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:55.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:55.394 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:55.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:55.394 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:55.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:55.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:07:55.652
00:07:55.652 --- 10.0.0.2 ping statistics ---
00:07:55.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:55.652 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:55.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:55.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:07:55.652
00:07:55.652 --- 10.0.0.1 ping statistics ---
00:07:55.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:55.652 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2295519
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2295519
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2295519 ']'
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:55.652 03:17:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:55.652 EAL: No free 2048 kB hugepages reported on node 1
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:07:56.584 03:17:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:07:56.584 EAL: No free 2048 kB hugepages reported on node 1
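The rpc_cmd sequence above is the complete target-side bring-up for this test: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, wrap it in subsystem nqn.2016-06.io.spdk:cnode1 as namespace 1, and open a TCP listener on 10.0.0.2:4420; spdk_nvme_perf then drives the workload. In the harness, rpc_cmd forwards to the application's JSON-RPC socket; below is a minimal standalone sketch of the same bring-up, assuming a target app already listening on the default /var/tmp/spdk.sock and the stock scripts/rpc.py client run from an SPDK checkout (the relative paths are illustrative):

# Create the TCP transport; -u 8192 sets the I/O unit size, and -o is passed
# through exactly as in the test's NVMF_TRANSPORT_OPTS above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Create a 64 MiB RAM-backed bdev with a 512-byte block size; the command
# prints the name of the new bdev (Malloc0 in this run).
scripts/rpc.py bdev_malloc_create 64 512
# Create the subsystem, allowing any host NQN (-a) and setting a serial (-s).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Expose the malloc bdev as namespace 1 of the subsystem.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listen for TCP initiators on the address assigned inside the namespace above.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Ten seconds of 4 KiB random I/O, 30% reads, queue depth 64; this is the
# exact invocation logged above, and its results follow.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'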
00:08:08.794 Initializing NVMe Controllers
00:08:08.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:08.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:08.794 Initialization complete. Launching workers.
00:08:08.794 ========================================================
00:08:08.794 Latency(us)
00:08:08.794 Device Information : IOPS MiB/s Average min max
00:08:08.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14777.98 57.73 4330.23 876.15 16410.68
00:08:08.794 ========================================================
00:08:08.794 Total : 14777.98 57.73 4330.23 876.15 16410.68
00:08:08.794
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:08.794 rmmod nvme_tcp
00:08:08.794 rmmod nvme_fabrics
00:08:08.794 rmmod nvme_keyring
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2295519 ']'
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2295519
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2295519 ']'
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2295519
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2295519
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']'
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2295519'
00:08:08.794 killing process with pid 2295519
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2295519
00:08:08.794 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2295519
00:08:08.794 nvmf threads initialize successfully
00:08:08.794 bdev subsystem init successfully
00:08:08.794 created a nvmf target service
00:08:08.794 create targets's poll groups done
00:08:08.794 all subsystems of target started
00:08:08.794 nvmf target is running
00:08:08.794 all subsystems of target stopped
00:08:08.794 destroy targets's poll groups done
00:08:08.794 destroyed the nvmf target service
00:08:08.794 bdev subsystem finish successfully
00:08:08.794 nvmf threads destroy successfully
03:17:52
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:08.795 03:17:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.066 03:17:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:09.066 03:17:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:08:09.066 03:17:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:09.066 03:17:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:09.327
00:08:09.327 real 0m15.816s
00:08:09.327 user 0m44.948s
00:08:09.327 sys 0m3.292s
00:08:09.327 03:17:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:09.327 03:17:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:09.327 ************************************
00:08:09.327 END TEST nvmf_example
00:08:09.327 ************************************
00:08:09.327 03:17:54 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:08:09.327 03:17:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:08:09.327 03:17:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:09.327 03:17:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:09.327 ************************************
00:08:09.327 START TEST nvmf_filesystem
00:08:09.327 ************************************
00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:08:09.327 * Looking for test storage...
00:08:09.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:09.327 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:09.328 03:17:54 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.328 03:17:54 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:09.328 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:09.329 #define SPDK_CONFIG_H 00:08:09.329 #define SPDK_CONFIG_APPS 1 00:08:09.329 #define SPDK_CONFIG_ARCH native 00:08:09.329 #undef SPDK_CONFIG_ASAN 00:08:09.329 #undef SPDK_CONFIG_AVAHI 00:08:09.329 #undef SPDK_CONFIG_CET 00:08:09.329 #define SPDK_CONFIG_COVERAGE 1 00:08:09.329 #define SPDK_CONFIG_CROSS_PREFIX 00:08:09.329 #undef SPDK_CONFIG_CRYPTO 00:08:09.329 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:09.329 #undef SPDK_CONFIG_CUSTOMOCF 00:08:09.329 #undef SPDK_CONFIG_DAOS 00:08:09.329 #define SPDK_CONFIG_DAOS_DIR 00:08:09.329 #define SPDK_CONFIG_DEBUG 1 00:08:09.329 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:09.329 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:09.329 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:09.329 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:09.329 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:09.329 #undef SPDK_CONFIG_DPDK_UADK 00:08:09.329 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:09.329 #define SPDK_CONFIG_EXAMPLES 1 00:08:09.329 #undef SPDK_CONFIG_FC 00:08:09.329 #define SPDK_CONFIG_FC_PATH 00:08:09.329 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:09.329 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:09.329 #undef SPDK_CONFIG_FUSE 00:08:09.329 #undef SPDK_CONFIG_FUZZER 00:08:09.329 #define SPDK_CONFIG_FUZZER_LIB 00:08:09.329 #undef SPDK_CONFIG_GOLANG 00:08:09.329 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:09.329 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:09.329 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:09.329 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:09.329 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:09.329 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:09.329 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:09.329 #define SPDK_CONFIG_IDXD 1 00:08:09.329 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:09.329 #undef SPDK_CONFIG_IPSEC_MB 00:08:09.329 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:09.329 #define SPDK_CONFIG_ISAL 1 00:08:09.329 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:09.329 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:09.329 #define SPDK_CONFIG_LIBDIR 00:08:09.329 #undef SPDK_CONFIG_LTO 00:08:09.329 #define SPDK_CONFIG_MAX_LCORES 
00:08:09.329 #define SPDK_CONFIG_NVME_CUSE 1 00:08:09.329 #undef SPDK_CONFIG_OCF 00:08:09.329 #define SPDK_CONFIG_OCF_PATH 00:08:09.329 #define SPDK_CONFIG_OPENSSL_PATH 00:08:09.329 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:09.329 #define SPDK_CONFIG_PGO_DIR 00:08:09.329 #undef SPDK_CONFIG_PGO_USE 00:08:09.329 #define SPDK_CONFIG_PREFIX /usr/local 00:08:09.329 #undef SPDK_CONFIG_RAID5F 00:08:09.329 #undef SPDK_CONFIG_RBD 00:08:09.329 #define SPDK_CONFIG_RDMA 1 00:08:09.329 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:09.329 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:09.329 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:09.329 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:09.329 #define SPDK_CONFIG_SHARED 1 00:08:09.329 #undef SPDK_CONFIG_SMA 00:08:09.329 #define SPDK_CONFIG_TESTS 1 00:08:09.329 #undef SPDK_CONFIG_TSAN 00:08:09.329 #define SPDK_CONFIG_UBLK 1 00:08:09.329 #define SPDK_CONFIG_UBSAN 1 00:08:09.329 #undef SPDK_CONFIG_UNIT_TESTS 00:08:09.329 #undef SPDK_CONFIG_URING 00:08:09.329 #define SPDK_CONFIG_URING_PATH 00:08:09.329 #undef SPDK_CONFIG_URING_ZNS 00:08:09.329 #undef SPDK_CONFIG_USDT 00:08:09.329 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:09.329 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:09.329 #define SPDK_CONFIG_VFIO_USER 1 00:08:09.329 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:09.329 #define SPDK_CONFIG_VHOST 1 00:08:09.329 #define SPDK_CONFIG_VIRTIO 1 00:08:09.329 #undef SPDK_CONFIG_VTUNE 00:08:09.329 #define SPDK_CONFIG_VTUNE_DIR 00:08:09.329 #define SPDK_CONFIG_WERROR 1 00:08:09.329 #define SPDK_CONFIG_WPDK_DIR 00:08:09.329 #undef SPDK_CONFIG_XNVME 00:08:09.329 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:09.329 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:09.330 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.331 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2297232 ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2297232 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.OuwVq1 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OuwVq1/tests/target /tmp/spdk.OuwVq1 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:09.332 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53118316544 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8876392448 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993977344 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:09.333 03:17:54 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996836352 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=520192 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:09.333 * Looking for test storage... 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53118316544 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11090984960 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:09.333 
03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.333 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.334 03:17:54 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
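Every record in this trace is bash xtrace output: autotest_common.sh enables errtrace and extdebug and installs the custom PS4 shown a few records above, so each line carries the wall-clock time (from \t, since bash expands PS4 the same way as PS1), the test domain, the source file and line, and the expanded command. A minimal sketch reproducing that format; the test_domain value and the echoed command are illustrative:

  # Sketch only: reproduce the trace format used throughout this log.
  test_domain=nvmf_tcp.nvmf_filesystem        # the harness sets this per test suite
  export PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
  set -o errtrace                             # propagate the ERR trap into functions
  shopt -s extdebug                           # lets print_backtrace walk the call stack
  set -x                                      # from here on every command is traced
  echo ready                                  # logs as: 03:17:54 nvmf_tcp.nvmf_filesystem -- test/demo.sh@7 -- $ echo ready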
00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.334 03:17:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
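The records above populate the tables of NVMe-oF-capable NICs (Intel E810 device IDs 0x1592/0x159b, the x722, and several Mellanox parts); the discovery that follows matches installed PCI functions against those IDs and resolves each hit to its kernel netdev, which is how cvl_0_0 and cvl_0_1 turn up below. A simplified reconstruction of that lookup; the real gather_supported_nvmf_pci_devs consults a prebuilt pci_bus_cache rather than walking sysfs directly:

  # Sketch only: find E810 ports and map them to net interfaces.
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue                       # Intel
    [[ $device == 0x159b || $device == 0x1592 ]] || continue  # E810 IDs from the table above
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && net_devs+=("${net##*/}")               # e.g. cvl_0_0, cvl_0_1
    done
  done
  echo "NVMe-oF test interfaces: ${net_devs[*]}"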
00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:11.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:11.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.863 03:17:56 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:11.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:11.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.863 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:11.864 00:08:11.864 --- 10.0.0.2 ping statistics --- 00:08:11.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.864 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:08:11.864 00:08:11.864 --- 10.0.0.1 ping statistics --- 00:08:11.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.864 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 ************************************ 00:08:11.864 START TEST nvmf_filesystem_no_in_capsule 00:08:11.864 ************************************ 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.864 03:17:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2298859 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2298859 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2298859 ']' 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:11.864 03:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 [2024-07-21 03:17:56.789963] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:11.864 [2024-07-21 03:17:56.790042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.864 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.864 [2024-07-21 03:17:56.863142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.864 [2024-07-21 03:17:56.961599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.864 [2024-07-21 03:17:56.961670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.864 [2024-07-21 03:17:56.961687] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.864 [2024-07-21 03:17:56.961700] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.864 [2024-07-21 03:17:56.961711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
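nvmfappstart, traced above, launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace built from one of the two E810 ports, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; only after that does the test issue the rpc_cmd calls seen below (a TCP transport with in-capsule data size 0, a 512 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with a namespace and a listener on 10.0.0.2:4420). A sketch of the same bring-up; the binary and script paths are illustrative, and the host-side connect omits the --hostnqn/--hostid flags the trace passes:

  # Sketch only: start the target in the test netns, wait for RPC, configure it.
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && ./scripts/rpc.py rpc_get_methods &>/dev/null && break
    sleep 0.1
  done
  kill -0 "$nvmfpid"                                          # target must still be alive
  rpc() { ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192 -c 0            # -c 0: no in-capsule data
  rpc bdev_malloc_create 512 512 -b Malloc1                   # 512 MiB bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420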
00:08:11.864 [2024-07-21 03:17:56.961769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.864 [2024-07-21 03:17:56.961838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.864 [2024-07-21 03:17:56.961859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.864 [2024-07-21 03:17:56.961864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 [2024-07-21 03:17:57.111388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.864 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 Malloc1 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 [2024-07-21 03:17:57.281329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:12.123 { 00:08:12.123 "name": "Malloc1", 00:08:12.123 "aliases": [ 00:08:12.123 "2c71dbac-dcab-4423-a103-edd567bfbee0" 00:08:12.123 ], 00:08:12.123 "product_name": "Malloc disk", 00:08:12.123 "block_size": 512, 00:08:12.123 "num_blocks": 1048576, 00:08:12.123 "uuid": "2c71dbac-dcab-4423-a103-edd567bfbee0", 00:08:12.123 "assigned_rate_limits": { 00:08:12.123 "rw_ios_per_sec": 0, 00:08:12.123 "rw_mbytes_per_sec": 0, 00:08:12.123 "r_mbytes_per_sec": 0, 00:08:12.123 "w_mbytes_per_sec": 0 00:08:12.123 }, 00:08:12.123 "claimed": true, 00:08:12.123 "claim_type": "exclusive_write", 00:08:12.123 "zoned": false, 00:08:12.123 "supported_io_types": { 00:08:12.123 "read": true, 00:08:12.123 "write": true, 00:08:12.123 "unmap": true, 00:08:12.123 "write_zeroes": true, 00:08:12.123 "flush": true, 00:08:12.123 "reset": true, 00:08:12.123 "compare": false, 00:08:12.123 "compare_and_write": false, 00:08:12.123 "abort": true, 00:08:12.123 "nvme_admin": false, 00:08:12.123 "nvme_io": false 00:08:12.123 }, 00:08:12.123 "memory_domains": [ 00:08:12.123 { 00:08:12.123 "dma_device_id": "system", 00:08:12.123 "dma_device_type": 1 00:08:12.123 }, 00:08:12.123 { 00:08:12.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.123 "dma_device_type": 2 00:08:12.123 } 00:08:12.123 ], 00:08:12.123 "driver_specific": {} 00:08:12.123 } 00:08:12.123 ]' 00:08:12.123 
03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.123 03:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.053 03:17:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.053 03:17:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:13.053 03:17:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.053 03:17:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:13.053 03:17:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.946 03:18:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.946 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:15.202 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:15.764 03:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.694 ************************************ 00:08:16.694 START TEST filesystem_ext4 00:08:16.694 ************************************ 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:16.694 03:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.694 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.952 Discarding device blocks: 0/522240 done 00:08:16.952 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.952 
Filesystem UUID: e1e7619c-beaf-4443-b101-2ff25f6f2217 00:08:16.952 Superblock backups stored on blocks: 00:08:16.952 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.952 00:08:16.952 Allocating group tables: 0/64 done 00:08:16.952 Writing inode tables: 0/64 done 00:08:16.952 Creating journal (8192 blocks): done 00:08:16.952 Writing superblocks and filesystem accounting information: 0/64 done 00:08:16.952 00:08:16.952 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:16.952 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.208 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2298859 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.465 00:08:17.465 real 0m0.693s 00:08:17.465 user 0m0.014s 00:08:17.465 sys 0m0.060s 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:17.465 ************************************ 00:08:17.465 END TEST filesystem_ext4 00:08:17.465 ************************************ 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.465 ************************************ 00:08:17.465 START TEST filesystem_btrfs 00:08:17.465 ************************************ 00:08:17.465 03:18:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:17.465 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:17.466 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.723 btrfs-progs v6.6.2 00:08:17.723 See https://btrfs.readthedocs.io for more information. 00:08:17.723 00:08:17.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:17.723 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.723 this does not affect your deployments: 00:08:17.723 - DUP for metadata (-m dup) 00:08:17.723 - enabled no-holes (-O no-holes) 00:08:17.723 - enabled free-space-tree (-R free-space-tree) 00:08:17.723 00:08:17.723 Label: (null) 00:08:17.723 UUID: a6b96262-59d2-4739-8626-c8b3f47afd17 00:08:17.723 Node size: 16384 00:08:17.723 Sector size: 4096 00:08:17.723 Filesystem size: 510.00MiB 00:08:17.723 Block group profiles: 00:08:17.723 Data: single 8.00MiB 00:08:17.723 Metadata: DUP 32.00MiB 00:08:17.723 System: DUP 8.00MiB 00:08:17.723 SSD detected: yes 00:08:17.723 Zoned device: no 00:08:17.723 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.723 Runtime features: free-space-tree 00:08:17.723 Checksum: crc32c 00:08:17.723 Number of devices: 1 00:08:17.723 Devices: 00:08:17.723 ID SIZE PATH 00:08:17.723 1 510.00MiB /dev/nvme0n1p1 00:08:17.723 00:08:17.723 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:17.723 03:18:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.653 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.653 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2298859 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.654 00:08:18.654 real 0m1.099s 00:08:18.654 user 0m0.017s 00:08:18.654 sys 0m0.113s 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:18.654 ************************************ 00:08:18.654 END TEST filesystem_btrfs 00:08:18.654 ************************************ 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:18.654 03:18:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.654 ************************************ 00:08:18.654 START TEST filesystem_xfs 00:08:18.654 ************************************ 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:18.654 03:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:18.654 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.654 = sectsz=512 attr=2, projid32bit=1 00:08:18.654 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.654 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.654 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.654 = sunit=0 swidth=0 blks 00:08:18.654 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.654 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.654 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.654 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.022 Discarding blocks...Done. 
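A quick cross-check on the geometry printed above: the xfs data section is 130560 blocks x 4096 bytes = 534,773,760 bytes, which is exactly the 510.00MiB partition parted carved out of the 536,870,912-byte (512 MiB) malloc namespace, and the same usable size the earlier mkfs.ext4 run reported as 522240 1k blocks. The per-filesystem exercise that each run_test above repeats reduces to the following sketch, assembled from the xtrace lines (device names and the /mnt/device mount point are as logged; error handling and the retry loop are simplified away):

    # Sketch of the nvmf_filesystem_create flow traced above; the force-flag
    # choice mirrors make_filesystem in common/autotest_common.sh.
    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        local force=-f
        [ "$fstype" = ext4 ] && force=-F        # ext4 forces with -F, btrfs/xfs with -f
        mkfs."$fstype" "$force" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa                   # one small write through the NVMe/TCP path
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        lsblk -l -o NAME | grep -q -w "$nvme_name"       # controller still visible
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition survived the cycle
    }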
00:08:20.022 03:18:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:20.022 03:18:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.389 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2298859 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.647 00:08:21.647 real 0m2.990s 00:08:21.647 user 0m0.017s 00:08:21.647 sys 0m0.058s 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.647 ************************************ 00:08:21.647 END TEST filesystem_xfs 00:08:21.647 ************************************ 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:21.647 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.905 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.905 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:21.905 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:21.905 03:18:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:21.905 
03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2298859 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2298859 ']' 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2298859 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2298859 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2298859' 00:08:21.905 killing process with pid 2298859 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2298859 00:08:21.905 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2298859 00:08:22.162 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.162 00:08:22.162 real 0m10.735s 00:08:22.162 user 0m41.123s 00:08:22.162 sys 0m1.644s 00:08:22.162 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.418 ************************************ 00:08:22.418 END TEST nvmf_filesystem_no_in_capsule 00:08:22.418 ************************************ 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.418 
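The in-capsule pass that starts below is functionally identical to the run just completed, except that the TCP transport is created with a 4096-byte in-capsule data size (-c 4096), so small writes travel inside the command capsule rather than as a separate data transfer. The target-side wiring the next trace walks through condenses to the following sketch (NQNs, addresses, and sizes exactly as logged; rpc_cmd is the suite's rpc.py wrapper):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096 = in-capsule data bytes
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55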
************************************ 00:08:22.418 START TEST nvmf_filesystem_in_capsule 00:08:22.418 ************************************ 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2300654 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2300654 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2300654 ']' 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:22.418 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.418 [2024-07-21 03:18:07.573238] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:22.418 [2024-07-21 03:18:07.573311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.418 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.418 [2024-07-21 03:18:07.638146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.418 [2024-07-21 03:18:07.723559] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.418 [2024-07-21 03:18:07.723637] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.418 [2024-07-21 03:18:07.723654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.418 [2024-07-21 03:18:07.723664] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.418 [2024-07-21 03:18:07.723689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:22.419 [2024-07-21 03:18:07.723737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.419 [2024-07-21 03:18:07.723797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.419 [2024-07-21 03:18:07.723863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.419 [2024-07-21 03:18:07.723865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.675 [2024-07-21 03:18:07.873166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.675 03:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 Malloc1 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 03:18:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 [2024-07-21 03:18:08.059484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:22.954 { 00:08:22.954 "name": "Malloc1", 00:08:22.954 "aliases": [ 00:08:22.954 "26883680-547f-48d8-b24b-19cbfa13bc73" 00:08:22.954 ], 00:08:22.954 "product_name": "Malloc disk", 00:08:22.954 "block_size": 512, 00:08:22.954 "num_blocks": 1048576, 00:08:22.954 "uuid": "26883680-547f-48d8-b24b-19cbfa13bc73", 00:08:22.954 "assigned_rate_limits": { 00:08:22.954 "rw_ios_per_sec": 0, 00:08:22.954 "rw_mbytes_per_sec": 0, 00:08:22.954 "r_mbytes_per_sec": 0, 00:08:22.954 "w_mbytes_per_sec": 0 00:08:22.954 }, 00:08:22.954 "claimed": true, 00:08:22.954 "claim_type": "exclusive_write", 00:08:22.954 "zoned": false, 00:08:22.954 "supported_io_types": { 00:08:22.954 "read": true, 00:08:22.954 "write": true, 00:08:22.954 "unmap": true, 00:08:22.954 "write_zeroes": true, 00:08:22.954 "flush": true, 00:08:22.954 "reset": true, 00:08:22.954 "compare": false, 00:08:22.954 "compare_and_write": false, 00:08:22.954 "abort": true, 00:08:22.954 "nvme_admin": false, 00:08:22.954 "nvme_io": false 00:08:22.954 }, 00:08:22.954 "memory_domains": [ 00:08:22.954 { 00:08:22.954 "dma_device_id": "system", 00:08:22.954 "dma_device_type": 1 00:08:22.954 }, 00:08:22.954 { 00:08:22.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.954 "dma_device_type": 2 00:08:22.954 } 00:08:22.954 ], 00:08:22.954 "driver_specific": {} 00:08:22.954 } 00:08:22.954 ]' 00:08:22.954 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:22.955 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.532 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.532 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:23.532 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.532 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:23.532 03:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.062 03:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:26.062 03:18:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:26.626 03:18:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.995 ************************************ 00:08:27.995 START TEST filesystem_in_capsule_ext4 00:08:27.995 ************************************ 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:27.995 03:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.995 mke2fs 1.46.5 (30-Dec-2021) 00:08:27.996 Discarding device blocks: 0/522240 done 00:08:27.996 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:27.996 Filesystem UUID: 4299dc23-9281-4a14-9a95-8fed12fbdeaa 00:08:27.996 Superblock backups stored on blocks: 00:08:27.996 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:27.996 00:08:27.996 Allocating group tables: 0/64 done 00:08:27.996 Writing inode tables: 0/64 done 00:08:31.295 Creating journal (8192 blocks): done 00:08:31.295 Writing superblocks and filesystem accounting information: 0/64 done 00:08:31.295 00:08:31.295 03:18:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:31.295 03:18:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2300654 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.295 00:08:31.295 real 0m3.550s 00:08:31.295 user 0m0.021s 00:08:31.295 sys 0m0.057s 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:31.295 ************************************ 00:08:31.295 END TEST filesystem_in_capsule_ext4 00:08:31.295 ************************************ 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.295 ************************************ 00:08:31.295 START TEST filesystem_in_capsule_btrfs 00:08:31.295 ************************************ 00:08:31.295 03:18:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:31.295 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:31.553 btrfs-progs v6.6.2 00:08:31.553 See https://btrfs.readthedocs.io for more information. 00:08:31.553 00:08:31.553 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:31.553 NOTE: several default settings have changed in version 5.15, please make sure 00:08:31.553 this does not affect your deployments: 00:08:31.553 - DUP for metadata (-m dup) 00:08:31.553 - enabled no-holes (-O no-holes) 00:08:31.553 - enabled free-space-tree (-R free-space-tree) 00:08:31.553 00:08:31.553 Label: (null) 00:08:31.553 UUID: e4d4df5c-2f20-415e-a85b-384fa4963894 00:08:31.553 Node size: 16384 00:08:31.553 Sector size: 4096 00:08:31.553 Filesystem size: 510.00MiB 00:08:31.553 Block group profiles: 00:08:31.553 Data: single 8.00MiB 00:08:31.553 Metadata: DUP 32.00MiB 00:08:31.553 System: DUP 8.00MiB 00:08:31.553 SSD detected: yes 00:08:31.553 Zoned device: no 00:08:31.553 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:31.553 Runtime features: free-space-tree 00:08:31.553 Checksum: crc32c 00:08:31.553 Number of devices: 1 00:08:31.553 Devices: 00:08:31.553 ID SIZE PATH 00:08:31.553 1 510.00MiB /dev/nvme0n1p1 00:08:31.553 00:08:31.553 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:31.553 03:18:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2300654 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.484 00:08:32.484 real 0m1.233s 00:08:32.484 user 0m0.023s 00:08:32.484 sys 0m0.120s 00:08:32.484 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:32.485 ************************************ 00:08:32.485 END TEST filesystem_in_capsule_btrfs 00:08:32.485 ************************************ 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.485 ************************************ 00:08:32.485 START TEST filesystem_in_capsule_xfs 00:08:32.485 ************************************ 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:32.485 03:18:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.741 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.741 = sectsz=512 attr=2, projid32bit=1 00:08:32.741 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.741 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.741 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.741 = sunit=0 swidth=0 blks 00:08:32.741 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.741 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.741 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.741 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.685 Discarding blocks...Done. 
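For reference, the size check and partitioning that preceded each of these mkfs runs (target/filesystem.sh@58 through @69 in the traces above) amounts to the following sketch, with the values as logged; the trace computes the bdev size in MiB via get_bdev_size, which this sketch folds into a byte comparison:

    bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    malloc_size=$(( bs * nb ))                 # 536870912 bytes as seen by the target
    nvme_size=$(sec_size_to_bytes nvme0n1)     # setup/common.sh reads /sys/block/<dev>
    (( nvme_size == malloc_size ))             # initiator must report the same capacity
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    sleep 1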
00:08:33.685 03:18:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:33.685 03:18:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2300654 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.579 00:08:35.579 real 0m2.863s 00:08:35.579 user 0m0.017s 00:08:35.579 sys 0m0.063s 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:35.579 ************************************ 00:08:35.579 END TEST filesystem_in_capsule_xfs 00:08:35.579 ************************************ 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.579 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:35.580 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:35.580 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.837 03:18:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2300654 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2300654 ']' 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2300654 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2300654 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2300654' 00:08:35.837 killing process with pid 2300654 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2300654 00:08:35.837 03:18:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2300654 00:08:36.095 03:18:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:36.095 00:08:36.095 real 0m13.876s 00:08:36.095 user 0m53.443s 00:08:36.095 sys 0m1.924s 00:08:36.095 03:18:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.095 03:18:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.095 ************************************ 00:08:36.095 END TEST nvmf_filesystem_in_capsule 00:08:36.095 ************************************ 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.353 rmmod nvme_tcp 00:08:36.353 rmmod nvme_fabrics 00:08:36.353 rmmod nvme_keyring 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.353 03:18:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.251 03:18:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.251 00:08:38.251 real 0m29.084s 00:08:38.251 user 1m35.441s 00:08:38.251 sys 0m5.163s 00:08:38.251 03:18:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.251 03:18:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.251 ************************************ 00:08:38.251 END TEST nvmf_filesystem 00:08:38.251 ************************************ 00:08:38.251 03:18:23 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.251 03:18:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:38.251 03:18:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.251 03:18:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.508 ************************************ 00:08:38.508 START TEST nvmf_target_discovery 00:08:38.508 ************************************ 00:08:38.508 03:18:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.508 * Looking for test storage... 
00:08:38.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.509 03:18:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.411 03:18:25 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.411 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:40.412 00:08:40.412 --- 10.0.0.2 ping statistics --- 00:08:40.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.412 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:40.412 00:08:40.412 --- 10.0.0.1 ping statistics --- 00:08:40.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.412 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2304680 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2304680 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2304680 ']' 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:40.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.412 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.412 [2024-07-21 03:18:25.703839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:40.412 [2024-07-21 03:18:25.703937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.669 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.669 [2024-07-21 03:18:25.773943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.669 [2024-07-21 03:18:25.862966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.669 [2024-07-21 03:18:25.863024] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.669 [2024-07-21 03:18:25.863037] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.669 [2024-07-21 03:18:25.863054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.669 [2024-07-21 03:18:25.863065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.669 [2024-07-21 03:18:25.863155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.669 [2024-07-21 03:18:25.866634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.669 [2024-07-21 03:18:25.866701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.669 [2024-07-21 03:18:25.866704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.927 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:40.927 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:40.927 03:18:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.927 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.927 03:18:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 [2024-07-21 03:18:26.018435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:40.927 03:18:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 Null1 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 [2024-07-21 03:18:26.058759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 Null2 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:40.927 03:18:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 Null3 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 Null4 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.928 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:41.185 00:08:41.185 Discovery Log Number of Records 6, Generation counter 6 00:08:41.185 =====Discovery Log Entry 0====== 00:08:41.185 trtype: tcp 00:08:41.185 adrfam: ipv4 00:08:41.185 subtype: current discovery subsystem 00:08:41.185 treq: not required 00:08:41.185 portid: 0 00:08:41.185 trsvcid: 4420 00:08:41.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:41.185 traddr: 10.0.0.2 00:08:41.185 eflags: explicit discovery connections, duplicate discovery information 00:08:41.185 sectype: none 00:08:41.185 =====Discovery Log Entry 1====== 00:08:41.185 trtype: tcp 00:08:41.185 adrfam: ipv4 00:08:41.185 subtype: nvme subsystem 00:08:41.185 treq: not required 00:08:41.185 portid: 0 00:08:41.185 trsvcid: 4420 00:08:41.185 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:41.185 traddr: 10.0.0.2 00:08:41.185 eflags: none 00:08:41.185 sectype: none 00:08:41.185 =====Discovery Log Entry 2====== 00:08:41.185 trtype: tcp 00:08:41.185 adrfam: ipv4 00:08:41.185 subtype: nvme subsystem 00:08:41.185 treq: not required 00:08:41.185 portid: 0 00:08:41.185 trsvcid: 4420 00:08:41.185 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:41.185 traddr: 10.0.0.2 00:08:41.185 eflags: none 00:08:41.185 sectype: none 00:08:41.185 =====Discovery Log Entry 3====== 00:08:41.185 trtype: tcp 00:08:41.185 adrfam: ipv4 00:08:41.185 subtype: nvme subsystem 00:08:41.185 treq: not required 00:08:41.185 portid: 0 00:08:41.185 trsvcid: 4420 00:08:41.185 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:41.185 traddr: 10.0.0.2 00:08:41.186 eflags: none 00:08:41.186 sectype: none 00:08:41.186 =====Discovery Log Entry 4====== 00:08:41.186 trtype: tcp 00:08:41.186 adrfam: ipv4 00:08:41.186 subtype: nvme subsystem 00:08:41.186 treq: not required 
00:08:41.186 portid: 0 00:08:41.186 trsvcid: 4420 00:08:41.186 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:41.186 traddr: 10.0.0.2 00:08:41.186 eflags: none 00:08:41.186 sectype: none 00:08:41.186 =====Discovery Log Entry 5====== 00:08:41.186 trtype: tcp 00:08:41.186 adrfam: ipv4 00:08:41.186 subtype: discovery subsystem referral 00:08:41.186 treq: not required 00:08:41.186 portid: 0 00:08:41.186 trsvcid: 4430 00:08:41.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:41.186 traddr: 10.0.0.2 00:08:41.186 eflags: none 00:08:41.186 sectype: none 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:41.186 Perform nvmf subsystem discovery via RPC 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 [ 00:08:41.186 { 00:08:41.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:41.186 "subtype": "Discovery", 00:08:41.186 "listen_addresses": [ 00:08:41.186 { 00:08:41.186 "trtype": "TCP", 00:08:41.186 "adrfam": "IPv4", 00:08:41.186 "traddr": "10.0.0.2", 00:08:41.186 "trsvcid": "4420" 00:08:41.186 } 00:08:41.186 ], 00:08:41.186 "allow_any_host": true, 00:08:41.186 "hosts": [] 00:08:41.186 }, 00:08:41.186 { 00:08:41.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.186 "subtype": "NVMe", 00:08:41.186 "listen_addresses": [ 00:08:41.186 { 00:08:41.186 "trtype": "TCP", 00:08:41.186 "adrfam": "IPv4", 00:08:41.186 "traddr": "10.0.0.2", 00:08:41.186 "trsvcid": "4420" 00:08:41.186 } 00:08:41.186 ], 00:08:41.186 "allow_any_host": true, 00:08:41.186 "hosts": [], 00:08:41.186 "serial_number": "SPDK00000000000001", 00:08:41.186 "model_number": "SPDK bdev Controller", 00:08:41.186 "max_namespaces": 32, 00:08:41.186 "min_cntlid": 1, 00:08:41.186 "max_cntlid": 65519, 00:08:41.186 "namespaces": [ 00:08:41.186 { 00:08:41.186 "nsid": 1, 00:08:41.186 "bdev_name": "Null1", 00:08:41.186 "name": "Null1", 00:08:41.186 "nguid": "D7D2360A632B416CBE2E1A44B1B3197F", 00:08:41.186 "uuid": "d7d2360a-632b-416c-be2e-1a44b1b3197f" 00:08:41.186 } 00:08:41.186 ] 00:08:41.186 }, 00:08:41.186 { 00:08:41.186 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:41.186 "subtype": "NVMe", 00:08:41.186 "listen_addresses": [ 00:08:41.186 { 00:08:41.186 "trtype": "TCP", 00:08:41.186 "adrfam": "IPv4", 00:08:41.186 "traddr": "10.0.0.2", 00:08:41.186 "trsvcid": "4420" 00:08:41.186 } 00:08:41.186 ], 00:08:41.186 "allow_any_host": true, 00:08:41.186 "hosts": [], 00:08:41.186 "serial_number": "SPDK00000000000002", 00:08:41.186 "model_number": "SPDK bdev Controller", 00:08:41.186 "max_namespaces": 32, 00:08:41.186 "min_cntlid": 1, 00:08:41.186 "max_cntlid": 65519, 00:08:41.186 "namespaces": [ 00:08:41.186 { 00:08:41.186 "nsid": 1, 00:08:41.186 "bdev_name": "Null2", 00:08:41.186 "name": "Null2", 00:08:41.186 "nguid": "B3981FC5984F42428FB0559619CEC3B2", 00:08:41.186 "uuid": "b3981fc5-984f-4242-8fb0-559619cec3b2" 00:08:41.186 } 00:08:41.186 ] 00:08:41.186 }, 00:08:41.186 { 00:08:41.186 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:41.186 "subtype": "NVMe", 00:08:41.186 "listen_addresses": [ 00:08:41.186 { 00:08:41.186 "trtype": "TCP", 00:08:41.186 "adrfam": "IPv4", 00:08:41.186 "traddr": "10.0.0.2", 00:08:41.186 "trsvcid": "4420" 00:08:41.186 } 00:08:41.186 ], 00:08:41.186 "allow_any_host": true, 
00:08:41.186 "hosts": [], 00:08:41.186 "serial_number": "SPDK00000000000003", 00:08:41.186 "model_number": "SPDK bdev Controller", 00:08:41.186 "max_namespaces": 32, 00:08:41.186 "min_cntlid": 1, 00:08:41.186 "max_cntlid": 65519, 00:08:41.186 "namespaces": [ 00:08:41.186 { 00:08:41.186 "nsid": 1, 00:08:41.186 "bdev_name": "Null3", 00:08:41.186 "name": "Null3", 00:08:41.186 "nguid": "F3863E1225244B028CB12DDBF095F40A", 00:08:41.186 "uuid": "f3863e12-2524-4b02-8cb1-2ddbf095f40a" 00:08:41.186 } 00:08:41.186 ] 00:08:41.186 }, 00:08:41.186 { 00:08:41.186 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:41.186 "subtype": "NVMe", 00:08:41.186 "listen_addresses": [ 00:08:41.186 { 00:08:41.186 "trtype": "TCP", 00:08:41.186 "adrfam": "IPv4", 00:08:41.186 "traddr": "10.0.0.2", 00:08:41.186 "trsvcid": "4420" 00:08:41.186 } 00:08:41.186 ], 00:08:41.186 "allow_any_host": true, 00:08:41.186 "hosts": [], 00:08:41.186 "serial_number": "SPDK00000000000004", 00:08:41.186 "model_number": "SPDK bdev Controller", 00:08:41.186 "max_namespaces": 32, 00:08:41.186 "min_cntlid": 1, 00:08:41.186 "max_cntlid": 65519, 00:08:41.186 "namespaces": [ 00:08:41.186 { 00:08:41.186 "nsid": 1, 00:08:41.186 "bdev_name": "Null4", 00:08:41.186 "name": "Null4", 00:08:41.186 "nguid": "3BD01A8AD9AB468ABC36682A52D4DDDE", 00:08:41.186 "uuid": "3bd01a8a-d9ab-468a-bc36-682a52d4ddde" 00:08:41.186 } 00:08:41.186 ] 00:08:41.186 } 00:08:41.186 ] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.444 rmmod nvme_tcp 00:08:41.444 rmmod nvme_fabrics 00:08:41.444 rmmod nvme_keyring 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2304680 ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2304680 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2304680 ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2304680 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2304680 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2304680' 00:08:41.444 killing process with pid 2304680 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2304680 00:08:41.444 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2304680 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.703 03:18:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.602 03:18:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.602 00:08:43.602 real 0m5.292s 00:08:43.602 user 0m4.483s 00:08:43.602 sys 0m1.758s 00:08:43.602 03:18:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.602 03:18:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.602 ************************************ 00:08:43.602 END TEST nvmf_target_discovery 00:08:43.602 ************************************ 00:08:43.602 03:18:28 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:43.602 03:18:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:43.602 03:18:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.602 03:18:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.602 ************************************ 00:08:43.602 START TEST nvmf_referrals 00:08:43.602 ************************************ 00:08:43.602 03:18:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:43.860 * Looking for test storage... 00:08:43.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
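referrals.sh begins by defining the three loopback addresses it will advertise as discovery referrals; the referral port (4430, matching NVMF_PORT_REFERRAL in the discovery test above) and the NQNs follow in the trace below. The discovery test already exercised the underlying RPC pair once (nvmf_discovery_add_referral at discovery.sh@35, nvmf_discovery_remove_referral at discovery.sh@47). A minimal sketch of that add/remove cycle over the three referral IPs, again assuming scripts/rpc.py, with nvmf_discovery_get_referrals taken to be the listing call (it does not appear in this trace):
  rpc=./scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do          # NVMF_REFERRAL_IP_1..3
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals                     # assumed listing RPC; not shown in this trace
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done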
00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.860 03:18:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:45.758 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.758 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.758 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.758 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.759 03:18:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.759 03:18:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.759 03:18:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.759 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.759 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.759 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.759 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.759 03:18:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:46.017 00:08:46.017 --- 10.0.0.2 ping statistics --- 00:08:46.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.017 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:08:46.017 00:08:46.017 --- 10.0.0.1 ping statistics --- 00:08:46.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.017 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2306770 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2306770 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2306770 ']' 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:46.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:46.017 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.017 [2024-07-21 03:18:31.200554] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:46.017 [2024-07-21 03:18:31.200637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.017 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.017 [2024-07-21 03:18:31.270603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.274 [2024-07-21 03:18:31.365927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.274 [2024-07-21 03:18:31.365980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.274 [2024-07-21 03:18:31.365996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.274 [2024-07-21 03:18:31.366009] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.274 [2024-07-21 03:18:31.366021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.274 [2024-07-21 03:18:31.366102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.274 [2024-07-21 03:18:31.366153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.274 [2024-07-21 03:18:31.366489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.274 [2024-07-21 03:18:31.366493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 [2024-07-21 03:18:31.532585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 [2024-07-21 03:18:31.544874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
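The referral round-trip that referrals.sh drives in the trace below reduces to a short sequence: register three referral entries against the discovery service, confirm that the RPC view and the on-wire discovery log page report the same addresses, then remove them and confirm both views come back empty. A condensed sketch, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and the --hostnqn/--hostid flags elided:

    # Add three referrals pointing at other discovery services (port 4430).
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Control-plane view: what the target believes it is referring to.
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # Wire view: fetch the discovery log page from 10.0.0.2:8009 and keep every
    # record other than the current discovery subsystem itself.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Tear down and re-check: both views must agree again.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The later steps in the trace repeat the same pattern with an explicit referral NQN (-n discovery, then -n nqn.2016-06.io.spdk:cnode1) to check that referrals to nvme subsystems and to discovery subsystems are reported under the right subtype in the log page.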
00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.274 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.531 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.788 03:18:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:46.788 03:18:32 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.046 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.303 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.304 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.304 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:47.304 03:18:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.560 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.817 03:18:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.817 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.817 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.818 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.075 rmmod nvme_tcp 00:08:48.075 rmmod nvme_fabrics 00:08:48.075 rmmod nvme_keyring 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2306770 ']' 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2306770 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2306770 ']' 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2306770 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2306770 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2306770' 00:08:48.075 killing process with pid 2306770 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2306770 00:08:48.075 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2306770 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.333 03:18:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.278 03:18:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.278 00:08:50.278 real 0m6.612s 00:08:50.278 user 0m9.700s 00:08:50.278 sys 0m2.130s 00:08:50.278 03:18:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:50.278 03:18:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.278 ************************************ 00:08:50.278 END TEST nvmf_referrals 00:08:50.278 ************************************ 00:08:50.278 03:18:35 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:50.278 03:18:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:50.278 03:18:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.278 03:18:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.278 ************************************ 00:08:50.278 START TEST nvmf_connect_disconnect 00:08:50.278 ************************************ 00:08:50.278 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:50.551 * Looking for test storage... 00:08:50.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.551 03:18:35 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.551 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.552 03:18:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.450 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.450 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:52.451 00:08:52.451 --- 10.0.0.2 ping statistics --- 00:08:52.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.451 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:52.451 00:08:52.451 --- 10.0.0.1 ping statistics --- 00:08:52.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.451 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.451 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2309067 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2309067 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2309067 ']' 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:52.708 03:18:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 [2024-07-21 03:18:37.839806] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
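Everything up to this point is the harness's standard single-host NVMe/TCP bring-up, repeated for connect_disconnect exactly as it ran for the referral test: one port of the E810 pair (cvl_0_0) is moved into a network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed from the commands visible in the trace, with names and addresses as printed above:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock before any rpc_cmd is issued against it.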
00:08:52.708 [2024-07-21 03:18:37.839881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.708 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.708 [2024-07-21 03:18:37.911656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.708 [2024-07-21 03:18:38.010766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.708 [2024-07-21 03:18:38.010830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.708 [2024-07-21 03:18:38.010847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.708 [2024-07-21 03:18:38.010861] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.708 [2024-07-21 03:18:38.010873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.708 [2024-07-21 03:18:38.010931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.708 [2024-07-21 03:18:38.010987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.708 [2024-07-21 03:18:38.011036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.708 [2024-07-21 03:18:38.011039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.966 [2024-07-21 03:18:38.171482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.966 03:18:38 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.966 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 [2024-07-21 03:18:38.228827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:52.967 03:18:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:55.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.592 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.832 [2024-07-21 03:20:05.635370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4ac80 is same with the state(5) to be set 00:10:20.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.158 [2024-07-21 03:20:24.009386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4ac60 is same with the state(5) to be set 00:10:39.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.510 [2024-07-21 03:20:35.575389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4a750 is same with the state(5) to be set 00:10:50.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
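A note on what is scrolling past here: with xtrace switched off (set +x above), only nvme-cli output reaches the log, one "disconnected 1 controller(s)" line per iteration of the 100-pass loop configured above. Each pass is roughly the pair below, reconstructed from the variables in the trace (NVME_CONNECT='nvme connect -i 8' and the listener at 10.0.0.2:4420); the verbatim loop body lives in target/connect_disconnect.sh. The interleaved tcp.c *ERROR* lines about the tqpair recv state fire during these disconnect races and appear benign here, since the test completes and returns 0.

    # One iteration of the connect/disconnect soak (a reconstruction, not
    # the verbatim loop body). -i 8 asks for 8 I/O queues per controller.
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "disconnected 1 controller(s)" lines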
00:11:25.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.064 [2024-07-21 03:21:26.138470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4aa90 is same with the state(5) to be set 00:11:41.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.694 rmmod nvme_tcp 
00:12:43.694 rmmod nvme_fabrics 00:12:43.694 rmmod nvme_keyring 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2309067 ']' 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2309067 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2309067 ']' 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2309067 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2309067 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2309067' 00:12:43.694 killing process with pid 2309067 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2309067 00:12:43.694 03:22:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2309067 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.953 03:22:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.857 03:22:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.857 00:12:45.857 real 3m55.496s 00:12:45.857 user 14m56.382s 00:12:45.857 sys 0m35.144s 00:12:45.857 03:22:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.857 03:22:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.857 ************************************ 00:12:45.857 END TEST nvmf_connect_disconnect 00:12:45.857 ************************************ 00:12:45.857 03:22:31 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.857 03:22:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:45.857 03:22:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.857 
03:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.857 ************************************ 00:12:45.857 START TEST nvmf_multitarget 00:12:45.857 ************************************ 00:12:45.857 03:22:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.857 * Looking for test storage... 00:12:45.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.857 03:22:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.115 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.116 
03:22:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.116 03:22:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.017 
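The arrays being filled above bucket supported NICs by PCI vendor:device pair (0x8086:0x159b is the Intel E810 "ice" device matched on this host; the 0x15b3 entries are Mellanox parts). The real helper fills them from a pci_bus_cache built elsewhere in nvmf/common.sh; a self-contained approximation using only sysfs might look like:

    # Enumerate PCI network functions, reporting vendor:device, the bound
    # driver, and the net interface name, roughly what the helper caches.
    for dev in /sys/bus/pci/devices/*; do
        [ -d "$dev/net" ] || continue          # skip non-network functions
        vendor=$(<"$dev/vendor")               # e.g. 0x8086 (Intel)
        device=$(<"$dev/device")               # e.g. 0x159b (E810 / ice)
        driver=unknown
        [ -e "$dev/driver" ] && driver=$(basename "$(readlink -f "$dev/driver")")
        echo "${dev##*/}: $vendor - $device driver=$driver net=$(ls "$dev/net")"
    done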
03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:48.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.017 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:48.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:12:48.018 00:12:48.018 --- 10.0.0.2 ping statistics --- 00:12:48.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.018 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:48.018 00:12:48.018 --- 10.0.0.1 ping statistics --- 00:12:48.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.018 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.018 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2340007 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2340007 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2340007 ']' 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.275 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.275 [2024-07-21 03:22:33.404100] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
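For reference, the target launch traced above decodes as follows; the flags come straight from the command line in the log, and the NOTICE lines that follow confirm their effect (tracepoint mask 0xFFFF, four reactors for -m 0xF):

    # Launch the NVMe-oF target inside the test's network namespace.
    #   -i 0      shared-memory instance id (NVMF_APP_SHM_ID)
    #   -e 0xFFFF tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified")
    #   -m 0xF    core mask, cores 0-3 (matches the four reactors started below)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF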
00:12:48.275 [2024-07-21 03:22:33.404183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.275 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.275 [2024-07-21 03:22:33.475369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.275 [2024-07-21 03:22:33.573501] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.275 [2024-07-21 03:22:33.573551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.275 [2024-07-21 03:22:33.573568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.275 [2024-07-21 03:22:33.573581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.275 [2024-07-21 03:22:33.573593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.275 [2024-07-21 03:22:33.573670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.275 [2024-07-21 03:22:33.573704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.275 [2024-07-21 03:22:33.573892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.275 [2024-07-21 03:22:33.573897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.532 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:48.789 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:48.790 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:48.790 "nvmf_tgt_1" 00:12:48.790 03:22:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:48.790 "nvmf_tgt_2" 00:12:48.790 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.790 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:49.047 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.047 
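The multitarget assertions above reduce to a count, mutate, recount pattern. Condensed, with the same RPCs and arguments that appear in the trace (jq is used only to count entries), they amount to the sketch below; the symmetric nvmf_delete_target calls and a final count follow immediately in the log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # create two named targets
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # (each prints its name)
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two new = 3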
03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.047 true 00:12:49.047 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:49.304 true 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.304 rmmod nvme_tcp 00:12:49.304 rmmod nvme_fabrics 00:12:49.304 rmmod nvme_keyring 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2340007 ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2340007 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2340007 ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2340007 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2340007 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2340007' 00:12:49.304 killing process with pid 2340007 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2340007 00:12:49.304 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2340007 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.562 03:22:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.093 03:22:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.093 00:12:52.093 real 0m5.737s 00:12:52.093 user 0m6.426s 00:12:52.093 sys 0m1.860s 00:12:52.093 03:22:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.093 03:22:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.093 ************************************ 00:12:52.093 END TEST nvmf_multitarget 00:12:52.093 ************************************ 00:12:52.093 03:22:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.093 03:22:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.093 03:22:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.093 03:22:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.093 ************************************ 00:12:52.093 START TEST nvmf_rpc 00:12:52.093 ************************************ 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.093 * Looking for test storage... 00:12:52.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.093 03:22:36 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.093 
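build_nvmf_app_args, traced above for the third time this run, is ordinary bash array composition; the target's final command line is just the array expanded at launch. A stripped-down sketch of the pattern (the real function in nvmf/common.sh has more branches than shown; paths and ids here are illustrative):

    NO_HUGE=()                                    # empty except in no-hugepages runs
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # the append traced above
    NVMF_APP+=("${NO_HUGE[@]}")                   # no-op while the array is empty
    echo "target command line: ${NVMF_APP[*]} -m 0xF"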
03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.093 03:22:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:53.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:53.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:53.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.994 
03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:53.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.994 03:22:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:12:53.994 00:12:53.994 --- 10.0.0.2 ping statistics --- 00:12:53.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.994 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:12:53.994 00:12:53.994 --- 10.0.0.1 ping statistics --- 00:12:53.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.994 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2342097 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2342097 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2342097 ']' 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.994 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.995 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.995 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.995 [2024-07-21 03:22:39.193210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
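nvmf_tcp_init, traced above, amounts to the following: move one port of the back-to-back pair into a private network namespace, address both ends, open TCP port 4420, and verify reachability in both directions before launching nvmf_tgt inside the namespace. Condensed from the trace (interface and namespace names as in this run; the address-flush steps are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator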
00:12:53.995 [2024-07-21 03:22:39.193302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.995 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.995 [2024-07-21 03:22:39.265841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.253 [2024-07-21 03:22:39.355514] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.253 [2024-07-21 03:22:39.355577] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.253 [2024-07-21 03:22:39.355606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.253 [2024-07-21 03:22:39.355626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.253 [2024-07-21 03:22:39.355638] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.253 [2024-07-21 03:22:39.355720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.253 [2024-07-21 03:22:39.355747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.253 [2024-07-21 03:22:39.355794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.253 [2024-07-21 03:22:39.355797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:54.253 "tick_rate": 2700000000, 00:12:54.253 "poll_groups": [ 00:12:54.253 { 00:12:54.253 "name": "nvmf_tgt_poll_group_000", 00:12:54.253 "admin_qpairs": 0, 00:12:54.253 "io_qpairs": 0, 00:12:54.253 "current_admin_qpairs": 0, 00:12:54.253 "current_io_qpairs": 0, 00:12:54.253 "pending_bdev_io": 0, 00:12:54.253 "completed_nvme_io": 0, 00:12:54.253 "transports": [] 00:12:54.253 }, 00:12:54.253 { 00:12:54.253 "name": "nvmf_tgt_poll_group_001", 00:12:54.253 "admin_qpairs": 0, 00:12:54.253 "io_qpairs": 0, 00:12:54.253 "current_admin_qpairs": 0, 00:12:54.253 "current_io_qpairs": 0, 00:12:54.253 "pending_bdev_io": 0, 00:12:54.253 "completed_nvme_io": 0, 00:12:54.253 "transports": [] 00:12:54.253 }, 00:12:54.253 { 00:12:54.253 "name": "nvmf_tgt_poll_group_002", 00:12:54.253 "admin_qpairs": 0, 00:12:54.253 "io_qpairs": 0, 00:12:54.253 "current_admin_qpairs": 0, 00:12:54.253 "current_io_qpairs": 0, 00:12:54.253 "pending_bdev_io": 0, 00:12:54.253 "completed_nvme_io": 0, 00:12:54.253 "transports": [] 
00:12:54.253 }, 00:12:54.253 { 00:12:54.253 "name": "nvmf_tgt_poll_group_003", 00:12:54.253 "admin_qpairs": 0, 00:12:54.253 "io_qpairs": 0, 00:12:54.253 "current_admin_qpairs": 0, 00:12:54.253 "current_io_qpairs": 0, 00:12:54.253 "pending_bdev_io": 0, 00:12:54.253 "completed_nvme_io": 0, 00:12:54.253 "transports": [] 00:12:54.253 } 00:12:54.253 ] 00:12:54.253 }' 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:54.253 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 [2024-07-21 03:22:39.608885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:54.512 "tick_rate": 2700000000, 00:12:54.512 "poll_groups": [ 00:12:54.512 { 00:12:54.512 "name": "nvmf_tgt_poll_group_000", 00:12:54.512 "admin_qpairs": 0, 00:12:54.512 "io_qpairs": 0, 00:12:54.512 "current_admin_qpairs": 0, 00:12:54.512 "current_io_qpairs": 0, 00:12:54.512 "pending_bdev_io": 0, 00:12:54.512 "completed_nvme_io": 0, 00:12:54.512 "transports": [ 00:12:54.512 { 00:12:54.512 "trtype": "TCP" 00:12:54.512 } 00:12:54.512 ] 00:12:54.512 }, 00:12:54.512 { 00:12:54.512 "name": "nvmf_tgt_poll_group_001", 00:12:54.512 "admin_qpairs": 0, 00:12:54.512 "io_qpairs": 0, 00:12:54.512 "current_admin_qpairs": 0, 00:12:54.512 "current_io_qpairs": 0, 00:12:54.512 "pending_bdev_io": 0, 00:12:54.512 "completed_nvme_io": 0, 00:12:54.512 "transports": [ 00:12:54.512 { 00:12:54.512 "trtype": "TCP" 00:12:54.512 } 00:12:54.512 ] 00:12:54.512 }, 00:12:54.512 { 00:12:54.512 "name": "nvmf_tgt_poll_group_002", 00:12:54.512 "admin_qpairs": 0, 00:12:54.512 "io_qpairs": 0, 00:12:54.512 "current_admin_qpairs": 0, 00:12:54.512 "current_io_qpairs": 0, 00:12:54.512 "pending_bdev_io": 0, 00:12:54.512 "completed_nvme_io": 0, 00:12:54.512 "transports": [ 00:12:54.512 { 00:12:54.512 "trtype": "TCP" 00:12:54.512 } 00:12:54.512 ] 00:12:54.512 }, 00:12:54.512 { 00:12:54.512 "name": "nvmf_tgt_poll_group_003", 00:12:54.512 "admin_qpairs": 0, 00:12:54.512 "io_qpairs": 0, 00:12:54.512 "current_admin_qpairs": 0, 00:12:54.512 "current_io_qpairs": 0, 00:12:54.512 "pending_bdev_io": 0, 00:12:54.512 "completed_nvme_io": 0, 00:12:54.512 "transports": [ 00:12:54.512 { 00:12:54.512 "trtype": "TCP" 00:12:54.512 } 00:12:54.512 ] 00:12:54.512 } 00:12:54.512 ] 
00:12:54.512 }' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 Malloc1 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.512 [2024-07-21 03:22:39.747973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:54.512 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:54.513 [2024-07-21 03:22:39.770372] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:54.513 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:54.513 could not add new controller: failed to write to nvme-fabrics device 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.513 03:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.078 03:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.079 03:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:55.079 03:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.079 03:22:40 
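The NOT wrapper above asserts the negative case of host-based access control: with allow_any_host disabled (-d), a connect from a host NQN that is not on the subsystem's allowed list is rejected by nvmf_qpair_access_allowed and nvme-cli reports an I/O error; after nvmf_subsystem_add_host registers that NQN, the same connect is admitted. Stripped of the test harness, the pattern looks roughly like this (HOSTNQN stands in for the UUID-based NQN of this run, and scripts/rpc.py stands in for the test's rpc_cmd wrapper):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# Expected to fail: the host is not on the subsystem's allowed-host list.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN" && echo "unexpected: connect should have been rejected"
# Register the host, then the same connect succeeds.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"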
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:55.079 03:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.605 [2024-07-21 03:22:42.535977] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:57.605 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:57.605 could not add new controller: failed to write to nvme-fabrics device 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.605 03:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.175 03:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.175 03:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:58.175 03:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.175 03:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:58.175 03:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.068 [2024-07-21 03:22:45.340631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.068 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.000 03:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.000 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:01.000 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.000 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:01.000 03:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:02.950 03:22:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.950 [2024-07-21 03:22:48.069952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.950 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.951 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.951 
03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.951 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.951 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.951 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.515 03:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.515 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:03.515 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.515 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:03.515 03:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.037 03:22:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.037 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.038 [2024-07-21 03:22:50.917202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.038 03:22:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.294 03:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.294 03:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:06.294 03:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.294 03:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:06.294 03:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 [2024-07-21 03:22:53.688660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.814 03:22:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.071 03:22:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.071 03:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:09.071 03:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
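waitforserial and waitforserial_disconnect, traced repeatedly above, poll lsblk for a block device whose SERIAL column matches the subsystem's serial number (SPDKISFASTANDAWESOME). A simplified reconstruction of the first helper, based only on what the xtrace shows; the real one in autotest_common.sh may differ in detail:

# Poll until the NVMe namespace with the given serial shows up in lsblk.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2                            # give the fabrics connect time to settle
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME     # as invoked after each nvme connect above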
00:13:09.071 03:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:09.071 03:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.594 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 [2024-07-21 03:22:56.489415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.595 03:22:56 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.595 03:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.851 03:22:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.851 03:22:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:11.851 03:22:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.851 03:22:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:11.851 03:22:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
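That delete closes out the five-pass create/connect/disconnect/teardown loop from target/rpc.sh (seq 1 5); the loop that follows repeats the subsystem lifecycle without connecting. One iteration of the first loop, reduced to its RPC and nvme-cli sequence (scripts/rpc.py again standing in for rpc_cmd):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed NSID 5
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1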
00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 [2024-07-21 03:22:59.239493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 [2024-07-21 03:22:59.287560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.370 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 [2024-07-21 03:22:59.335745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 [2024-07-21 03:22:59.383923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 [2024-07-21 03:22:59.432088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:14.371 "tick_rate": 2700000000, 00:13:14.371 "poll_groups": [ 00:13:14.371 { 00:13:14.371 "name": "nvmf_tgt_poll_group_000", 00:13:14.371 "admin_qpairs": 2, 00:13:14.371 
"io_qpairs": 84, 00:13:14.371 "current_admin_qpairs": 0, 00:13:14.371 "current_io_qpairs": 0, 00:13:14.371 "pending_bdev_io": 0, 00:13:14.371 "completed_nvme_io": 183, 00:13:14.371 "transports": [ 00:13:14.371 { 00:13:14.371 "trtype": "TCP" 00:13:14.371 } 00:13:14.371 ] 00:13:14.371 }, 00:13:14.371 { 00:13:14.371 "name": "nvmf_tgt_poll_group_001", 00:13:14.371 "admin_qpairs": 2, 00:13:14.371 "io_qpairs": 84, 00:13:14.371 "current_admin_qpairs": 0, 00:13:14.371 "current_io_qpairs": 0, 00:13:14.371 "pending_bdev_io": 0, 00:13:14.371 "completed_nvme_io": 184, 00:13:14.371 "transports": [ 00:13:14.371 { 00:13:14.371 "trtype": "TCP" 00:13:14.371 } 00:13:14.371 ] 00:13:14.371 }, 00:13:14.371 { 00:13:14.371 "name": "nvmf_tgt_poll_group_002", 00:13:14.371 "admin_qpairs": 1, 00:13:14.371 "io_qpairs": 84, 00:13:14.371 "current_admin_qpairs": 0, 00:13:14.371 "current_io_qpairs": 0, 00:13:14.371 "pending_bdev_io": 0, 00:13:14.371 "completed_nvme_io": 147, 00:13:14.371 "transports": [ 00:13:14.371 { 00:13:14.371 "trtype": "TCP" 00:13:14.371 } 00:13:14.371 ] 00:13:14.371 }, 00:13:14.371 { 00:13:14.371 "name": "nvmf_tgt_poll_group_003", 00:13:14.371 "admin_qpairs": 2, 00:13:14.371 "io_qpairs": 84, 00:13:14.371 "current_admin_qpairs": 0, 00:13:14.371 "current_io_qpairs": 0, 00:13:14.371 "pending_bdev_io": 0, 00:13:14.371 "completed_nvme_io": 172, 00:13:14.371 "transports": [ 00:13:14.371 { 00:13:14.371 "trtype": "TCP" 00:13:14.371 } 00:13:14.371 ] 00:13:14.371 } 00:13:14.371 ] 00:13:14.371 }' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.371 rmmod nvme_tcp 00:13:14.371 rmmod nvme_fabrics 00:13:14.371 rmmod nvme_keyring 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:14.371 03:22:59 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2342097 ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2342097 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2342097 ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2342097 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2342097 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2342097' 00:13:14.371 killing process with pid 2342097 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2342097 00:13:14.371 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2342097 00:13:14.627 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.627 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.627 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.628 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.628 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.628 03:22:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.628 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.628 03:22:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.155 03:23:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.155 00:13:17.155 real 0m25.044s 00:13:17.155 user 1m21.321s 00:13:17.155 sys 0m4.075s 00:13:17.155 03:23:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.155 03:23:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.155 ************************************ 00:13:17.155 END TEST nvmf_rpc 00:13:17.155 ************************************ 00:13:17.155 03:23:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.155 03:23:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.155 03:23:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.155 03:23:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.155 ************************************ 00:13:17.155 START TEST nvmf_invalid 00:13:17.155 ************************************ 00:13:17.156 03:23:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.156 * Looking for test storage... 
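An aside on the check that closed the nvmf_rpc suite above: nvmf_get_stats returns per-poll-group counters, and the jsum helper sums one field across all poll groups before the test asserts that the total is non-zero. Reconstructed from the xtrace lines (target/rpc.sh@19-20), the helper is roughly the sketch below; the verbatim definition lives in test/nvmf/target/rpc.sh and may differ in detail, and rpc_cmd plus a live target are assumed, as in the run above.

# jsum: sum a jq filter over the JSON captured in $stats.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Mirroring the assertions traced above: by the end of the run every poll
# group should have served at least one admin and one I/O queue pair.
stats=$(rpc_cmd nvmf_get_stats)
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # this run summed to 7
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # this run summed to 336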
00:13:17.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.156 03:23:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.070 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:19.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:19.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:19.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:19.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:13:19.071 00:13:19.071 --- 10.0.0.2 ping statistics --- 00:13:19.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.071 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:13:19.071 00:13:19.071 --- 10.0.0.1 ping statistics --- 00:13:19.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.071 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2346604 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2346604 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2346604 ']' 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:19.071 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.071 [2024-07-21 03:23:04.306940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
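The interface plumbing traced above (nvmf/common.sh@242-268) is what lets a single host act as both target and initiator: the first ice port is moved into a private network namespace and addressed as 10.0.0.2, the second port stays in the root namespace as 10.0.0.1, and the two-way pings confirm the path before the target starts. Condensed into a standalone sketch, using only the commands and interface names visible in this run:

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"               # target port leaves the root ns
ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

The target app is then launched inside the namespace, exactly as the nvmfpid line shows: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.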
00:13:19.071 [2024-07-21 03:23:04.307031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.071 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.071 [2024-07-21 03:23:04.372846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.329 [2024-07-21 03:23:04.462610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.329 [2024-07-21 03:23:04.462676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.329 [2024-07-21 03:23:04.462690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.329 [2024-07-21 03:23:04.462701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.329 [2024-07-21 03:23:04.462711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.329 [2024-07-21 03:23:04.462779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.329 [2024-07-21 03:23:04.462903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.329 [2024-07-21 03:23:04.462946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.329 [2024-07-21 03:23:04.462948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:19.329 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3575 00:13:19.586 [2024-07-21 03:23:04.833182] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:19.586 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:19.586 { 00:13:19.586 "nqn": "nqn.2016-06.io.spdk:cnode3575", 00:13:19.586 "tgt_name": "foobar", 00:13:19.586 "method": "nvmf_create_subsystem", 00:13:19.586 "req_id": 1 00:13:19.586 } 00:13:19.586 Got JSON-RPC error response 00:13:19.586 response: 00:13:19.586 { 00:13:19.586 "code": -32603, 00:13:19.586 "message": "Unable to find target foobar" 00:13:19.586 }' 00:13:19.586 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:19.586 { 00:13:19.586 "nqn": "nqn.2016-06.io.spdk:cnode3575", 00:13:19.586 "tgt_name": "foobar", 00:13:19.586 "method": "nvmf_create_subsystem", 00:13:19.586 "req_id": 1 00:13:19.586 } 00:13:19.586 Got JSON-RPC error response 00:13:19.586 response: 00:13:19.586 { 00:13:19.586 "code": -32603, 00:13:19.586 "message": "Unable to find target foobar" 00:13:19.586 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:19.586 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:19.586 03:23:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6933 00:13:19.843 [2024-07-21 03:23:05.126172] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6933: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:19.843 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:19.843 { 00:13:19.843 "nqn": "nqn.2016-06.io.spdk:cnode6933", 00:13:19.843 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:19.843 "method": "nvmf_create_subsystem", 00:13:19.843 "req_id": 1 00:13:19.843 } 00:13:19.843 Got JSON-RPC error response 00:13:19.843 response: 00:13:19.843 { 00:13:19.843 "code": -32602, 00:13:19.843 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:19.843 }' 00:13:19.843 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:19.843 { 00:13:19.843 "nqn": "nqn.2016-06.io.spdk:cnode6933", 00:13:19.843 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:19.843 "method": "nvmf_create_subsystem", 00:13:19.843 "req_id": 1 00:13:19.843 } 00:13:19.843 Got JSON-RPC error response 00:13:19.843 response: 00:13:19.843 { 00:13:19.843 "code": -32602, 00:13:19.843 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:19.843 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.843 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:19.843 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2701 00:13:20.408 [2024-07-21 03:23:05.419101] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2701: invalid model number 'SPDK_Controller' 00:13:20.408 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:20.408 { 00:13:20.408 "nqn": "nqn.2016-06.io.spdk:cnode2701", 00:13:20.408 "model_number": "SPDK_Controller\u001f", 00:13:20.408 "method": "nvmf_create_subsystem", 00:13:20.408 "req_id": 1 00:13:20.408 } 00:13:20.408 Got JSON-RPC error response 00:13:20.408 response: 00:13:20.408 { 00:13:20.408 "code": -32602, 00:13:20.408 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.408 }' 00:13:20.408 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:20.408 { 00:13:20.408 "nqn": "nqn.2016-06.io.spdk:cnode2701", 00:13:20.408 "model_number": "SPDK_Controller\u001f", 00:13:20.408 "method": "nvmf_create_subsystem", 00:13:20.408 "req_id": 1 00:13:20.408 } 00:13:20.408 Got JSON-RPC error response 00:13:20.408 response: 00:13:20.408 { 00:13:20.408 "code": -32602, 00:13:20.408 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.408 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:20.408 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 127 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:13:20.409 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sPEPI_PjjMIsOEw'\''xqQ-ikz' 00:13:20.669 03:23:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz' nqn.2016-06.io.spdk:cnode22781 00:13:20.926 [2024-07-21 03:23:06.145495] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22781: invalid model number '3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz' 00:13:20.926 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:20.926 { 00:13:20.926 "nqn": "nqn.2016-06.io.spdk:cnode22781", 00:13:20.926 "model_number": "3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz", 00:13:20.926 "method": "nvmf_create_subsystem", 00:13:20.926 "req_id": 1 00:13:20.926 } 00:13:20.926 Got JSON-RPC error response 00:13:20.926 response: 
00:13:20.926 { 00:13:20.926 "code": -32602, 00:13:20.926 "message": "Invalid MN 3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz" 00:13:20.926 }' 00:13:20.926 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:20.926 { 00:13:20.926 "nqn": "nqn.2016-06.io.spdk:cnode22781", 00:13:20.926 "model_number": "3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz", 00:13:20.926 "method": "nvmf_create_subsystem", 00:13:20.926 "req_id": 1 00:13:20.926 } 00:13:20.926 Got JSON-RPC error response 00:13:20.926 response: 00:13:20.926 { 00:13:20.926 "code": -32602, 00:13:20.926 "message": "Invalid MN 3oUn}+VaB^&o,y=qQcE`tG8Ius@ -!h/Ll>qQ-ikz" 00:13:20.926 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:20.926 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:21.183 [2024-07-21 03:23:06.378316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.184 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:21.440 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:21.440 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:21.440 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:21.440 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:21.440 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:21.697 [2024-07-21 03:23:06.888014] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:21.697 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:21.697 { 00:13:21.697 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.697 "listen_address": { 00:13:21.697 "trtype": "tcp", 00:13:21.697 "traddr": "", 00:13:21.697 "trsvcid": "4421" 00:13:21.697 }, 00:13:21.697 "method": "nvmf_subsystem_remove_listener", 00:13:21.697 "req_id": 1 00:13:21.697 } 00:13:21.697 Got JSON-RPC error response 00:13:21.697 response: 00:13:21.697 { 00:13:21.697 "code": -32602, 00:13:21.697 "message": "Invalid parameters" 00:13:21.697 }' 00:13:21.697 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:21.697 { 00:13:21.697 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.697 "listen_address": { 00:13:21.697 "trtype": "tcp", 00:13:21.697 "traddr": "", 00:13:21.697 "trsvcid": "4421" 00:13:21.697 }, 00:13:21.697 "method": "nvmf_subsystem_remove_listener", 00:13:21.697 "req_id": 1 00:13:21.697 } 00:13:21.697 Got JSON-RPC error response 00:13:21.697 response: 00:13:21.697 { 00:13:21.697 "code": -32602, 00:13:21.697 "message": "Invalid parameters" 00:13:21.697 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:21.697 03:23:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25417 -i 0 00:13:21.954 [2024-07-21 03:23:07.128735] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25417: invalid cntlid range [0-65519] 00:13:21.955 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:21.955 { 00:13:21.955 "nqn": "nqn.2016-06.io.spdk:cnode25417", 00:13:21.955 "min_cntlid": 0, 
00:13:21.955 "method": "nvmf_create_subsystem", 00:13:21.955 "req_id": 1 00:13:21.955 } 00:13:21.955 Got JSON-RPC error response 00:13:21.955 response: 00:13:21.955 { 00:13:21.955 "code": -32602, 00:13:21.955 "message": "Invalid cntlid range [0-65519]" 00:13:21.955 }' 00:13:21.955 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:21.955 { 00:13:21.955 "nqn": "nqn.2016-06.io.spdk:cnode25417", 00:13:21.955 "min_cntlid": 0, 00:13:21.955 "method": "nvmf_create_subsystem", 00:13:21.955 "req_id": 1 00:13:21.955 } 00:13:21.955 Got JSON-RPC error response 00:13:21.955 response: 00:13:21.955 { 00:13:21.955 "code": -32602, 00:13:21.955 "message": "Invalid cntlid range [0-65519]" 00:13:21.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.955 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2246 -i 65520 00:13:22.212 [2024-07-21 03:23:07.373564] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2246: invalid cntlid range [65520-65519] 00:13:22.212 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:22.212 { 00:13:22.212 "nqn": "nqn.2016-06.io.spdk:cnode2246", 00:13:22.212 "min_cntlid": 65520, 00:13:22.212 "method": "nvmf_create_subsystem", 00:13:22.212 "req_id": 1 00:13:22.212 } 00:13:22.212 Got JSON-RPC error response 00:13:22.212 response: 00:13:22.212 { 00:13:22.212 "code": -32602, 00:13:22.212 "message": "Invalid cntlid range [65520-65519]" 00:13:22.212 }' 00:13:22.212 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:22.212 { 00:13:22.212 "nqn": "nqn.2016-06.io.spdk:cnode2246", 00:13:22.212 "min_cntlid": 65520, 00:13:22.212 "method": "nvmf_create_subsystem", 00:13:22.212 "req_id": 1 00:13:22.212 } 00:13:22.212 Got JSON-RPC error response 00:13:22.212 response: 00:13:22.212 { 00:13:22.212 "code": -32602, 00:13:22.212 "message": "Invalid cntlid range [65520-65519]" 00:13:22.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.212 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6453 -I 0 00:13:22.520 [2024-07-21 03:23:07.614393] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6453: invalid cntlid range [1-0] 00:13:22.520 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:22.520 { 00:13:22.520 "nqn": "nqn.2016-06.io.spdk:cnode6453", 00:13:22.520 "max_cntlid": 0, 00:13:22.520 "method": "nvmf_create_subsystem", 00:13:22.520 "req_id": 1 00:13:22.520 } 00:13:22.520 Got JSON-RPC error response 00:13:22.520 response: 00:13:22.520 { 00:13:22.520 "code": -32602, 00:13:22.520 "message": "Invalid cntlid range [1-0]" 00:13:22.520 }' 00:13:22.520 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:22.520 { 00:13:22.520 "nqn": "nqn.2016-06.io.spdk:cnode6453", 00:13:22.520 "max_cntlid": 0, 00:13:22.520 "method": "nvmf_create_subsystem", 00:13:22.520 "req_id": 1 00:13:22.520 } 00:13:22.520 Got JSON-RPC error response 00:13:22.520 response: 00:13:22.520 { 00:13:22.520 "code": -32602, 00:13:22.520 "message": "Invalid cntlid range [1-0]" 00:13:22.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.520 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23426 -I 65520 00:13:22.818 [2024-07-21 03:23:07.863170] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23426: invalid cntlid range [1-65520] 00:13:22.818 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:22.818 { 00:13:22.818 "nqn": "nqn.2016-06.io.spdk:cnode23426", 00:13:22.818 "max_cntlid": 65520, 00:13:22.818 "method": "nvmf_create_subsystem", 00:13:22.818 "req_id": 1 00:13:22.818 } 00:13:22.818 Got JSON-RPC error response 00:13:22.818 response: 00:13:22.818 { 00:13:22.818 "code": -32602, 00:13:22.818 "message": "Invalid cntlid range [1-65520]" 00:13:22.818 }' 00:13:22.818 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:22.818 { 00:13:22.818 "nqn": "nqn.2016-06.io.spdk:cnode23426", 00:13:22.818 "max_cntlid": 65520, 00:13:22.818 "method": "nvmf_create_subsystem", 00:13:22.818 "req_id": 1 00:13:22.818 } 00:13:22.818 Got JSON-RPC error response 00:13:22.818 response: 00:13:22.818 { 00:13:22.818 "code": -32602, 00:13:22.818 "message": "Invalid cntlid range [1-65520]" 00:13:22.818 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.818 03:23:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7881 -i 6 -I 5 00:13:22.818 [2024-07-21 03:23:08.120016] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7881: invalid cntlid range [6-5] 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:23.075 { 00:13:23.075 "nqn": "nqn.2016-06.io.spdk:cnode7881", 00:13:23.075 "min_cntlid": 6, 00:13:23.075 "max_cntlid": 5, 00:13:23.075 "method": "nvmf_create_subsystem", 00:13:23.075 "req_id": 1 00:13:23.075 } 00:13:23.075 Got JSON-RPC error response 00:13:23.075 response: 00:13:23.075 { 00:13:23.075 "code": -32602, 00:13:23.075 "message": "Invalid cntlid range [6-5]" 00:13:23.075 }' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:23.075 { 00:13:23.075 "nqn": "nqn.2016-06.io.spdk:cnode7881", 00:13:23.075 "min_cntlid": 6, 00:13:23.075 "max_cntlid": 5, 00:13:23.075 "method": "nvmf_create_subsystem", 00:13:23.075 "req_id": 1 00:13:23.075 } 00:13:23.075 Got JSON-RPC error response 00:13:23.075 response: 00:13:23.075 { 00:13:23.075 "code": -32602, 00:13:23.075 "message": "Invalid cntlid range [6-5]" 00:13:23.075 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:23.075 { 00:13:23.075 "name": "foobar", 00:13:23.075 "method": "nvmf_delete_target", 00:13:23.075 "req_id": 1 00:13:23.075 } 00:13:23.075 Got JSON-RPC error response 00:13:23.075 response: 00:13:23.075 { 00:13:23.075 "code": -32602, 00:13:23.075 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:23.075 }' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:23.075 { 00:13:23.075 "name": "foobar", 00:13:23.075 "method": "nvmf_delete_target", 00:13:23.075 "req_id": 1 00:13:23.075 } 00:13:23.075 Got JSON-RPC error response 00:13:23.075 response: 00:13:23.075 { 00:13:23.075 "code": -32602, 00:13:23.075 "message": "The specified target doesn't exist, cannot delete it." 00:13:23.075 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.075 rmmod nvme_tcp 00:13:23.075 rmmod nvme_fabrics 00:13:23.075 rmmod nvme_keyring 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2346604 ']' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2346604 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2346604 ']' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2346604 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2346604 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2346604' 00:13:23.075 killing process with pid 2346604 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2346604 00:13:23.075 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2346604 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
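The invalid.sh cases above probe the controller-ID bounds enforced by nvmf_create_subsystem: the error strings show the accepted cntlid range is [1, 65519], so min_cntlid 0, max_cntlid 65520, max_cntlid 0, and min_cntlid > max_cntlid are each rejected with JSON-RPC error -32602 before any subsystem is created. A minimal by-hand reproduction, as a sketch assuming a running nvmf target on the default RPC socket and an arbitrary NQN:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0           # rejected: min_cntlid below 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 65520       # rejected: max_cntlid above 65519
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5      # rejected: min_cntlid > max_cntlid
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 1 -I 65519  # accepted: the full valid range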
00:13:23.334 03:23:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.866 03:23:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.866 00:13:25.866 real 0m8.633s 00:13:25.866 user 0m20.040s 00:13:25.866 sys 0m2.469s 00:13:25.866 03:23:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.866 03:23:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:25.866 ************************************ 00:13:25.866 END TEST nvmf_invalid 00:13:25.866 ************************************ 00:13:25.866 03:23:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.866 03:23:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:25.866 03:23:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.866 03:23:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.866 ************************************ 00:13:25.866 START TEST nvmf_abort 00:13:25.866 ************************************ 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.866 * Looking for test storage... 00:13:25.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.866 03:23:10 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.866 03:23:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
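abort.sh, traced below, stacks a delay bdev on a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096) and exports it over NVMe/TCP; the 1000000-microsecond read/write latencies keep I/O queued long enough for the abort utility to have commands to cancel. A sketch of the equivalent standalone rpc.py calls (the script issues the same ones through its rpc_cmd wrapper):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB RAM-backed bdev, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420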
00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.867 03:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:27.767 00:13:27.767 --- 10.0.0.2 ping statistics --- 00:13:27.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.767 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:27.767 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:27.767 00:13:27.768 --- 10.0.0.1 ping statistics --- 00:13:27.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.768 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2349236 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2349236 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2349236 ']' 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:27.768 03:23:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:27.768 [2024-07-21 03:23:12.997878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
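The target is launched with core mask -m 0xE, i.e. reactors on cores 1-3 (matching the three "Reactor started on core ..." notices that follow), which leaves core 0 free for the client side that the abort example later pins with -c 0x1. The mask arithmetic, as a quick sketch:

printf '%#x\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # 0xe -> reactors on cores 1, 2, 3
printf '%#x\n' $(( 1 << 0 ))                           # 0x1 -> client pinned to core 0 only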
00:13:27.768 [2024-07-21 03:23:12.997974] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.768 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.768 [2024-07-21 03:23:13.068537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.025 [2024-07-21 03:23:13.159723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.025 [2024-07-21 03:23:13.159779] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.025 [2024-07-21 03:23:13.159794] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.025 [2024-07-21 03:23:13.159806] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.025 [2024-07-21 03:23:13.159817] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.025 [2024-07-21 03:23:13.159904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.025 [2024-07-21 03:23:13.159966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.025 [2024-07-21 03:23:13.159969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.025 [2024-07-21 03:23:13.306396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.025 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 Malloc0 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 Delay0 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:28.282 03:23:13 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 [2024-07-21 03:23:13.371734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.282 03:23:13 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:28.282 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.282 [2024-07-21 03:23:13.436416] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:30.803 Initializing NVMe Controllers 00:13:30.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:30.803 controller IO queue size 128 less than required 00:13:30.803 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:30.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:30.803 Initialization complete. Launching workers. 
00:13:30.803 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32387 00:13:30.803 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32448, failed to submit 62 00:13:30.803 success 32391, unsuccess 57, failed 0 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.803 rmmod nvme_tcp 00:13:30.803 rmmod nvme_fabrics 00:13:30.803 rmmod nvme_keyring 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2349236 ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2349236 ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2349236' 00:13:30.803 killing process with pid 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2349236 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.803 03:23:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.706 03:23:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.706 00:13:32.706 real 0m7.234s 00:13:32.706 user 0m10.350s 00:13:32.706 sys 0m2.514s 00:13:32.706 03:23:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.706 03:23:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:32.706 ************************************ 00:13:32.706 END TEST nvmf_abort 00:13:32.706 ************************************ 00:13:32.706 03:23:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:32.706 03:23:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.706 03:23:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.706 03:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.706 ************************************ 00:13:32.706 START TEST nvmf_ns_hotplug_stress 00:13:32.706 ************************************ 00:13:32.706 03:23:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:32.706 * Looking for test storage... 00:13:32.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.706 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.706 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.965 03:23:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.965 03:23:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.965 03:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:34.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:34.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.870 03:23:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:34.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:34.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.870 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.871 03:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:13:34.871 00:13:34.871 --- 10.0.0.2 ping statistics --- 00:13:34.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.871 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:34.871 00:13:34.871 --- 10.0.0.1 ping statistics --- 00:13:34.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.871 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2351528 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2351528 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2351528 ']' 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.871 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-07-21 03:23:20.218318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:35.129 [2024-07-21 03:23:20.218405] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.129 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.129 [2024-07-21 03:23:20.292468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.129 [2024-07-21 03:23:20.385452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:35.129 [2024-07-21 03:23:20.385514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:35.130 [2024-07-21 03:23:20.385541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:35.130 [2024-07-21 03:23:20.385555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:35.130 [2024-07-21 03:23:20.385567] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:35.130 [2024-07-21 03:23:20.385652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:35.130 [2024-07-21 03:23:20.385708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:35.130 [2024-07-21 03:23:20.385711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:13:35.389 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:35.647 [2024-07-21 03:23:20.750695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:35.647 03:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:35.905 03:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:36.163 [2024-07-21 03:23:21.241512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:36.163 03:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:36.421 03:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:13:36.679 Malloc0
00:13:36.679 03:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:36.936 Delay0
00:13:36.936 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:37.194 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:37.194 NULL1 00:13:37.452 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:37.452 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2351863 00:13:37.452 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:37.452 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:37.452 03:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.641 Read completed with error (sct=0, sc=11) 00:13:38.898 03:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.155 03:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:39.155 03:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:39.155 true 00:13:39.412 03:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:39.412 03:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.976 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.233 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:40.233 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:40.489 true 00:13:40.489 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:40.489 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.745 03:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.001 03:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:41.001 03:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:41.295 true 00:13:41.295 03:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:41.295 03:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.225 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.225 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:42.225 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:42.482 true 00:13:42.482 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:42.482 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.739 03:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.996 03:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:42.996 03:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:43.253 true 00:13:43.253 03:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:43.253 03:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.185 03:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.442 03:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:44.442 03:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:44.700 true 00:13:44.700 03:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:44.700 03:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.957 03:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.215 03:23:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:45.215 03:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:45.215 true 00:13:45.215 03:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:45.215 03:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.147 03:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.413 03:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:46.413 03:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:46.670 true 00:13:46.670 03:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:46.670 03:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.926 03:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.181 03:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:47.181 03:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:47.438 true 00:13:47.438 03:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:47.438 03:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.366 03:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.622 03:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:48.622 03:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:48.878 true 00:13:48.878 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:48.878 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.135 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.391 
03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:49.391 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:49.647 true 00:13:49.647 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:49.647 03:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.578 03:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.836 03:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:50.836 03:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:51.093 true 00:13:51.093 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:51.093 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.350 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.607 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:51.607 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:51.864 true 00:13:51.864 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:51.864 03:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.796 03:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.796 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:52.796 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:53.053 true 00:13:53.053 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:53.053 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.311 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.583 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:53.583 03:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:53.840 true 00:13:53.840 03:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:53.840 03:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.771 03:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.028 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:55.028 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:55.285 true 00:13:55.285 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:55.285 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.543 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.801 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:55.801 03:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:56.058 true 00:13:56.058 03:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:56.058 03:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.988 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.988 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:56.988 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:57.244 true 00:13:57.244 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:57.244 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.531 03:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.788 03:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:57.788 03:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:58.045 true 00:13:58.045 03:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:58.045 03:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.976 03:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.233 03:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:59.233 03:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:59.490 true 00:13:59.748 03:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:13:59.748 03:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.748 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.005 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:00.005 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:00.263 true 00:14:00.263 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:14:00.263 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.521 03:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.779 03:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:00.779 03:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:01.037 true 00:14:01.037 03:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:14:01.037 03:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.409 03:23:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.409 03:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:02.409 03:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:02.666 true 00:14:02.666 03:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:14:02.666 03:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:03.596 03:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.859 03:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:03.859 03:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:04.116 true 00:14:04.116 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:14:04.116 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.373 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.629 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:04.629 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:04.886 true 00:14:04.886 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863 00:14:04.886 03:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.817 03:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.817 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:05.817 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:14:06.075 true
00:14:06.332 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863
00:14:06.332 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:06.590 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.590 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:14:06.590 03:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:14:06.847 true
00:14:06.847 03:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863
00:14:06.847 03:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.781 Initializing NVMe Controllers
00:14:07.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:07.781 Controller IO queue size 128, less than required.
00:14:07.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:07.781 Controller IO queue size 128, less than required.
00:14:07.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:07.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:07.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:07.781 Initialization complete. Launching workers.
00:14:07.781 ========================================================
00:14:07.781 Latency(us)
00:14:07.781 Device Information : IOPS MiB/s Average min max
00:14:07.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 891.37 0.44 80227.24 3280.02 1067996.71
00:14:07.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11098.43 5.42 11533.36 4119.89 365553.34
00:14:07.781 ========================================================
00:14:07.781 Total : 11989.80 5.85 16640.32 3280.02 1067996.71
00:14:07.781
00:14:08.037 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:08.037 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:14:08.037 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:14:08.294 true
00:14:08.294 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2351863
00:14:08.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2351863) - No such process
00:14:08.294 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2351863
00:14:08.294 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.551 03:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:08.808 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:08.808 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:08.808 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:08.808 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:08.808 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:09.064 null0
00:14:09.065 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:09.065 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:09.065 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:09.321 null1
00:14:09.321 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:09.321 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:09.321 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:09.579 null2
00:14:09.579 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:09.579 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:14:09.579 03:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:09.836 null3 00:14:09.836 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.836 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.836 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:10.093 null4 00:14:10.093 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.093 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.094 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:10.350 null5 00:14:10.350 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.350 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.350 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:10.607 null6 00:14:10.607 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.607 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.607 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:10.864 null7 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
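For orientation: phase one, which wound down above with 'kill: (2351863) - No such process', paired a 30-second spdk_nvme_perf random-read load against the subsystem with a hotplug loop on namespace 1; the 'Read completed with error (sct=0, sc=11)' and 'Message suppressed 999 times' lines are perf reporting reads that arrived while the namespace was detached, which is precisely the condition the test provokes. Stripped of the xtrace noise, the loop is roughly this sketch (paths shortened; not the verbatim script):

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do   # loop until the 30 s perf load exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))            # grew 1000 -> 1028 over this run
      rpc.py bdev_null_resize NULL1 "$null_size"   # resize NULL1 under load
  done

The Total row of the latency table above is consistent with an IOPS-weighted mean of the two namespaces: 891.37 + 11098.43 = 11989.80 IOPS, and (891.37*80227.24 + 11098.43*11533.36) / 11989.80 ≈ 16640 us average latency.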
00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
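Each of these rpc.py invocations is a thin JSON-RPC client talking to the target over the Unix socket /var/tmp/spdk.sock (the one waitforlisten polled earlier). As a rough illustration only (method and parameter names as rpc.py encodes them; the exact schema is not shown in this log, so treat it as an assumption), the 'nvmf_subsystem_add_ns -n 3 ... null2' call traced above corresponds to a request shaped like:

  # illustrative raw JSON-RPC equivalent; schema assumed, not taken from this log
  printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
               "namespace": {"bdev_name": "null2", "nsid": 3}}}' \
      | nc -U /var/tmp/spdk.sock

With eight such workers in flight, the test exercises the target's RPC path and namespace state machine concurrently, not just the hotplug code itself.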
00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
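The launch pattern traced above and completed just below (script lines 58-66, worker body at lines 14-18) reconstructs to a compact harness: eight null bdevs (100 MiB, 4096-byte blocks) are created up front, one background worker per namespace ID flips its bdev in and out of cnode1 ten times, and the script then waits on all eight PIDs, which is why the add/remove traces interleave from here on. Reconstructed from the trace (rpc.py path shortened):

  add_remove() {                      # worker: hotplug one namespace ID ten times
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      rpc.py bdev_null_create "null$i" 100 4096   # backing bdev for worker i
      add_remove "$((i + 1))" "null$i" &          # nsid i+1 paired with null<i>
      pids+=($!)
  done
  wait "${pids[@]}"                               # 2355904 2355905 ... 2355917 here

The full script lives at spdk/test/nvmf/target/ns_hotplug_stress.sh in this workspace.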
00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2355904 2355905 2355907 2355909 2355911 2355913 2355915 2355917 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.864 03:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.121 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.378 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:11.634 03:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:11.890 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:12.147 03:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
[... the same add/remove cycle repeats, with namespaces 1-8 re-attached (null0-null7) and detached in varying order on each pass, from 00:14:12.404 through 00:14:16.287, where the loops reach i == 10 and exit ...]
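The churn above is driven by lines 16-18 of target/ns_hotplug_stress.sh, which the xtrace tags as sh@16 (the loop header), sh@17 (nvmf_subsystem_add_ns), and sh@18 (nvmf_subsystem_remove_ns). A minimal sketch of that pattern, reconstructed from the trace alone; the add_remove function name, the backgrounding, and the shell scaffolding are assumptions, not the verbatim script:

  #!/usr/bin/env bash
  # Hedged reconstruction of the hotplug churn: the rpc.py path, the NQN, the
  # null0-null7 bdev names, and the 10-iteration bound come from the trace;
  # everything else is assumed.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; ++i )); do        # sh@16 in the trace
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
      done
  }
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &      # one worker per namespace
  done
  wait

The interleaved (( ++i )) checks and the way the add and remove calls cluster by timestamp are consistent with eight such loops running concurrently, one per namespace, which is what makes this a hotplug stress rather than a sequential add/remove walk.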
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:16.287 rmmod nvme_tcp
00:14:16.287 rmmod nvme_fabrics
00:14:16.287 rmmod nvme_keyring
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2351528 ']'
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2351528
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2351528 ']'
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2351528
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2351528
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2351528'
00:14:16.287 killing process with pid 2351528
00:14:16.287 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2351528
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2351528
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:16.545 03:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:18.443 03:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:18.699
00:14:18.699 real	0m45.795s
00:14:18.699 user	3m27.928s
00:14:18.699 sys	0m16.796s
00:14:18.699 03:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:18.699 03:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:18.699 ************************************
00:14:18.699 END TEST nvmf_ns_hotplug_stress
00:14:18.699 ************************************
00:14:18.699 03:24:03 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:18.699 03:24:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:18.699 03:24:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:18.699 03:24:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:18.700 ************************************
00:14:18.700 START TEST nvmf_connect_stress
00:14:18.700 ************************************
00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:18.700 * Looking for test storage...
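Before connect_stress gets going, the teardown just traced deserves a note: nvmftestfini unloads the kernel initiator modules before killing the target process, so a namespace left wedged by the hotplug churn would have surfaced here as a failed modprobe -r rather than inside the test body. A condensed sketch of that sequence; the break-on-success shape of the retry loop is inferred from the single successful pass seen here, and nvmfpid stands in for the traced pid 2351528:

  # Sketch of the nvmftestfini path from nvmf/common.sh as traced above.
  sync                        # flush before yanking the initiator modules
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # retried while connections drain
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid"             # stop the nvmf_tgt reactor process
  wait "$nvmfpid"
  ip -4 addr flush cvl_0_1    # leave the initiator NIC unconfigured for the next test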
00:14:18.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.700 03:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.594 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:20.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:20.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:20.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.595 03:24:05 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:14:20.595 Found net devices under 0000:0a:00.1: cvl_0_1
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:20.595 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:20.852 03:24:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:20.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:20.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:20.852 00:14:20.852 --- 10.0.0.2 ping statistics --- 00:14:20.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.852 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:14:20.852 00:14:20.852 --- 10.0.0.1 ping statistics --- 00:14:20.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.852 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2358764 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2358764 00:14:20.852 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2358764 ']' 00:14:20.853 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.853 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:20.853 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.853 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:20.853 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.853 [2024-07-21 03:24:06.082632] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
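The nvmf_tcp_init block above wires the two E810 ports into a point-to-point NVMe/TCP test link: one port is moved into a private network namespace to play the target, while the other stays in the root namespace as the initiator. A minimal standalone sketch of that wiring, using the interface names and 10.0.0.0/24 addressing taken from the trace (run as root; cvl_0_0/cvl_0_1 are the renamed ice ports on this rig):

    # Target side lives in its own namespace; initiator stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator, 10.0.0.2 = target (inside the namespace).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP (port 4420) in from the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before any NVMe traffic.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The cross-namespace pings recorded above (0.136 ms and 0.085 ms round trips) confirm the link works before the target application is started.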
00:14:20.853 [2024-07-21 03:24:06.082713] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.853 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.853 [2024-07-21 03:24:06.156679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.110 [2024-07-21 03:24:06.251180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.110 [2024-07-21 03:24:06.251251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.110 [2024-07-21 03:24:06.251268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.110 [2024-07-21 03:24:06.251283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.110 [2024-07-21 03:24:06.251295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.110 [2024-07-21 03:24:06.251356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.110 [2024-07-21 03:24:06.251480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.110 [2024-07-21 03:24:06.251483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.110 [2024-07-21 03:24:06.396865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.110 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.367 [2024-07-21 03:24:06.435772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.367 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.368 NULL1 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2358804 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.368 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.626 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.626 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:21.626 03:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.626 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.626 03:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.883 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.883 03:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:21.883 03:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.883 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.883 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.447 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.447 03:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:22.447 03:24:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.447 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.447 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.705 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.705 03:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:22.705 03:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.705 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.705 03:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.962 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.962 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:22.962 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.962 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.962 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.220 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.220 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:23.220 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.220 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.220 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.477 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.477 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:23.477 03:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.477 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.478 03:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.049 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.049 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:24.049 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.049 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.049 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.307 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.307 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:24.307 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.307 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.307 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.564 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.564 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:24.564 03:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:24.564 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.565 03:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.822 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.822 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:24.822 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.822 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.822 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.080 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.080 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:25.080 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.080 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.080 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.646 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.646 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:25.646 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.646 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.646 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.904 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.904 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:25.904 03:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.904 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.904 03:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.162 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:26.162 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.162 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.162 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.419 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.419 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:26.419 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.419 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.419 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.678 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:26.678 03:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.678 03:24:11 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.678 03:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.262 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.262 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:27.262 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.262 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.262 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.519 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.519 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:27.519 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.519 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.519 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.775 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.775 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:27.775 03:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.775 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.775 03:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.032 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.032 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:28.032 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.032 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.032 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.289 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.289 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:28.289 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.289 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.289 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.852 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.852 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:28.852 03:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.852 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.852 03:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.118 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.118 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:29.118 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.118 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:29.118 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.375 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.375 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:29.375 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.375 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.375 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.654 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.654 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:29.654 03:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.654 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.654 03:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.911 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.911 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:29.911 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.911 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.911 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.475 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.475 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:30.475 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.475 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.475 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.732 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.732 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:30.732 03:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.732 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.732 03:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.989 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.989 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:30.989 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.989 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.989 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.246 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.246 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:31.246 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.246 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.246 03:24:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.246 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2358804 00:14:31.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2358804) - No such process 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2358804 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.504 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.504 rmmod nvme_tcp 00:14:31.504 rmmod nvme_fabrics 00:14:31.762 rmmod nvme_keyring 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2358764 ']' 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2358764 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2358764 ']' 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2358764 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2358764 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2358764' 00:14:31.762 killing process with pid 2358764 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2358764 00:14:31.762 03:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2358764 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
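The repetitive kill -0 2358804 / rpc_cmd pairs above are the heart of connect_stress.sh: the stressor runs in the background for ten seconds (-t 10) while the harness keeps firing batched RPCs at the target until kill -0 finally reports "No such process". A sketch of that control flow, reconstructed from the trace (paths shortened; rpc.txt is filled earlier by the twenty "# cat" steps that append randomly chosen RPCs):

    # Start the stressor against the subsystem created above, in the background.
    ./connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Keep the target busy with the pre-built RPC batch for as long as it runs.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt
    done
    wait "$PERF_PID"    # reaps the stressor once kill -0 stops succeeding
    rm -f rpc.txt

When the ten seconds elapse, kill -0 fails with "No such process" (seen above at connect_stress.sh line 34) and the harness falls through to teardown.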
00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.020 03:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.922 03:24:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.922 00:14:33.922 real 0m15.314s 00:14:33.922 user 0m38.458s 00:14:33.922 sys 0m5.841s 00:14:33.922 03:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.922 03:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.922 ************************************ 00:14:33.922 END TEST nvmf_connect_stress 00:14:33.922 ************************************ 00:14:33.922 03:24:19 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.922 03:24:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:33.922 03:24:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.922 03:24:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.922 ************************************ 00:14:33.922 START TEST nvmf_fused_ordering 00:14:33.922 ************************************ 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.922 * Looking for test storage... 
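The nvmftestfini sequence just recorded is the mirror image of the setup: unload the kernel initiator modules, kill the target, delete the namespace, and flush the leftover address, after which time(1) prints the per-test totals (15.3 s wall clock here). A condensed sketch of that teardown, under the same naming assumptions as the setup sketch earlier:

    # Unload the kernel-side NVMe/TCP initiator stack (the rmmod lines above).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the in-namespace target and reap it.
    kill "$nvmfpid"
    wait "$nvmfpid"

    # Drop the namespace and any address left on the initiator port.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

The fused_ordering test that starts here rebuilds exactly this topology from scratch, which is why the PCI discovery and nvmf_tcp_init traces below repeat almost verbatim.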
00:14:33.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.922 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:34.180 03:24:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:36.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:36.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:36.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.076 03:24:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:36.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:14:36.076 00:14:36.076 --- 10.0.0.2 ping statistics --- 00:14:36.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.076 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:14:36.076 00:14:36.076 --- 10.0.0.1 ping statistics --- 00:14:36.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.076 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2362450 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2362450 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2362450 ']' 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:36.076 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.076 [2024-07-21 03:24:21.349201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
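For fused_ordering the target gets a single-core mask (-m 0x2, one reactor) instead of connect_stress's 0xE, and is again launched inside the namespace; waitforlisten then polls the RPC socket until the app answers. The trace that follows provisions the target over RPC: transport, subsystem, listener, a 1000 MiB null bdev, and the namespace attach. A combined sketch — the rpc.py path and the rpc_get_methods probe are assumptions about how waitforlisten checks readiness, while the rpc_cmd arguments are verbatim from the trace:

    # Start the target in the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # assumed readiness probe; the harness prints "Waiting for process..."
    done

    # Provision it: TCP transport, subsystem, listener, null bdev, namespace.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512        # 1000 MiB backing, 512 B blocks
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Once the fused_ordering binary attaches (the "Attached to nqn.2016-06.io.spdk:cnode1 / Namespace ID: 1 size: 1GB" lines below), each fused_ordering(N) line marks one ordered submission from that run.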
00:14:36.076 [2024-07-21 03:24:21.349287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.076 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.333 [2024-07-21 03:24:21.413302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.333 [2024-07-21 03:24:21.496578] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.333 [2024-07-21 03:24:21.496653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.333 [2024-07-21 03:24:21.496677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.333 [2024-07-21 03:24:21.496688] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.333 [2024-07-21 03:24:21.496699] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.333 [2024-07-21 03:24:21.496739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.333 [2024-07-21 03:24:21.634001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.333 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 [2024-07-21 03:24:21.650209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 NULL1 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.590 03:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:36.590 [2024-07-21 03:24:21.694706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:36.590 [2024-07-21 03:24:21.694743] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362481 ] 00:14:36.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.848 Attached to nqn.2016-06.io.spdk:cnode1 00:14:36.848 Namespace ID: 1 size: 1GB 00:14:36.848 fused_ordering(0) 00:14:36.848 fused_ordering(1) 00:14:36.848 fused_ordering(2) 00:14:36.848 fused_ordering(3) 00:14:36.848 fused_ordering(4) 00:14:36.848 fused_ordering(5) 00:14:36.848 fused_ordering(6) 00:14:36.848 fused_ordering(7) 00:14:36.848 fused_ordering(8) 00:14:36.848 fused_ordering(9) 00:14:36.848 fused_ordering(10) 00:14:36.848 fused_ordering(11) 00:14:36.848 fused_ordering(12) 00:14:36.848 fused_ordering(13) 00:14:36.848 fused_ordering(14) 00:14:36.848 fused_ordering(15) 00:14:36.848 fused_ordering(16) 00:14:36.848 fused_ordering(17) 00:14:36.848 fused_ordering(18) 00:14:36.848 fused_ordering(19) 00:14:36.848 fused_ordering(20) 00:14:36.848 fused_ordering(21) 00:14:36.848 fused_ordering(22) 00:14:36.848 fused_ordering(23) 00:14:36.848 fused_ordering(24) 00:14:36.848 fused_ordering(25) 00:14:36.848 fused_ordering(26) 00:14:36.848 fused_ordering(27) 00:14:36.848 fused_ordering(28) 00:14:36.848 fused_ordering(29) 00:14:36.848 fused_ordering(30) 00:14:36.848 fused_ordering(31) 00:14:36.848 fused_ordering(32) 00:14:36.848 fused_ordering(33) 00:14:36.848 fused_ordering(34) 00:14:36.848 fused_ordering(35) 00:14:36.848 fused_ordering(36) 00:14:36.848 fused_ordering(37) 00:14:36.848 fused_ordering(38) 00:14:36.848 fused_ordering(39) 00:14:36.848 fused_ordering(40) 00:14:36.848 fused_ordering(41) 00:14:36.848 fused_ordering(42) 00:14:36.848 fused_ordering(43) 00:14:36.848 fused_ordering(44) 00:14:36.848 fused_ordering(45) 
00:14:36.848 fused_ordering(46) ... fused_ordering(1013) [contiguous entries 46 through 1013 elided; timestamps advance from 00:14:36.848 to 00:14:39.164 across the run] 00:14:39.164
fused_ordering(1014) 00:14:39.164 fused_ordering(1015) 00:14:39.164 fused_ordering(1016) 00:14:39.164 fused_ordering(1017) 00:14:39.164 fused_ordering(1018) 00:14:39.164 fused_ordering(1019) 00:14:39.164 fused_ordering(1020) 00:14:39.164 fused_ordering(1021) 00:14:39.164 fused_ordering(1022) 00:14:39.164 fused_ordering(1023) 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.164 rmmod nvme_tcp 00:14:39.164 rmmod nvme_fabrics 00:14:39.164 rmmod nvme_keyring 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2362450 ']' 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2362450 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2362450 ']' 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2362450 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2362450 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2362450' 00:14:39.164 killing process with pid 2362450 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2362450 00:14:39.164 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2362450 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
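[Editor's note] The 1,024 fused_ordering(...) lines above are the per-command output of test/nvme/fused_ordering, which exercises fused command ordering against the 1 GB null namespace configured just before it. The rpc_cmd calls in fused_ordering.sh wrap SPDK's RPC client; below is a hedged replay of the same target setup using scripts/rpc.py directly, assuming an SPDK checkout with a built tree and the default /var/tmp/spdk.sock RPC socket.

    #!/usr/bin/env bash
    set -e
    # Sketch: the fused_ordering target setup replayed via scripts/rpc.py.
    # Relative paths and the default RPC socket are assumptions about the environment.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192   # options copied verbatim from NVMF_TRANSPORT_OPTS above
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512           # backs the 1 GB namespace reported above
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns "$NQN" NULL1

    # Drive fused commands at the listener from the initiator (default) namespace:
    ./test/nvme/fused_ordering/fused_ordering \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"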
00:14:39.422 03:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.952 03:24:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.952 00:14:41.952 real 0m7.473s 00:14:41.952 user 0m5.172s 00:14:41.952 sys 0m3.189s 00:14:41.952 03:24:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.952 03:24:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:41.952 ************************************ 00:14:41.952 END TEST nvmf_fused_ordering 00:14:41.952 ************************************ 00:14:41.952 03:24:26 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:41.952 03:24:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.952 03:24:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.952 03:24:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.952 ************************************ 00:14:41.952 START TEST nvmf_delete_subsystem 00:14:41.952 ************************************ 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:41.952 * Looking for test storage... 00:14:41.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.952 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.953 03:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:43.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:43.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.851 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.852 03:24:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:43.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:43.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:14:43.852 00:14:43.852 --- 10.0.0.2 ping statistics --- 00:14:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.852 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:14:43.852 00:14:43.852 --- 10.0.0.1 ping statistics --- 00:14:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.852 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2364791 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2364791 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2364791 ']' 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:43.852 03:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.852 [2024-07-21 03:24:28.957699] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:43.852 [2024-07-21 03:24:28.957799] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.852 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.852 [2024-07-21 03:24:29.023524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.852 [2024-07-21 03:24:29.111451] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:43.852 [2024-07-21 03:24:29.111510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.852 [2024-07-21 03:24:29.111539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.852 [2024-07-21 03:24:29.111551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.852 [2024-07-21 03:24:29.111561] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.852 [2024-07-21 03:24:29.111653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.852 [2024-07-21 03:24:29.111658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.109 [2024-07-21 03:24:29.258479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.109 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.110 [2024-07-21 03:24:29.274714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.110 NULL1 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.110 Delay0 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2364813 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:44.110 03:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:44.110 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.110 [2024-07-21 03:24:29.349388] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
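Taken out of the xtrace noise, the target-side configuration the test just applied boils down to the following calls (a sketch, not the script's literal code: rpc_cmd in the trace forwards to scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten line above, and $rpc below is shorthand introduced here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512 B blocks
    # the delay bdev injects ~1 s of latency per I/O (arguments are microseconds),
    # so perf I/O is guaranteed to still be in flight when the subsystem is deleted
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side: 5 s of queue-depth-128 randrw against that namespace, backgrounded
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The nvmf_delete_subsystem that follows is issued after the 'sleep 2' traced above, roughly two seconds into the five-second perf run, which is what produces the burst of aborted completions below.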
00:14:46.004 03:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.004 03:24:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.004 03:24:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:46.262 [several hundred repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines elided: the queued perf I/O is failed and aborted while the subsystem is deleted underneath it]
00:14:46.262 [2024-07-21 03:24:31.521711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f924c000c00 is same with the state(5) to be set
00:14:47.196 [2024-07-21 03:24:32.487392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11af620 is same with the state(5) to be set
00:14:47.454 [2024-07-21 03:24:32.522967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1192ce0 is same with the state(5) to be set
00:14:47.455 [2024-07-21 03:24:32.523234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1197d40 is same with the state(5) to be set
00:14:47.455 [2024-07-21 03:24:32.523447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f924c00c2f0 is same with the state(5) to be set
00:14:47.455 [2024-07-21 03:24:32.523883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f924c00c600 is same with the state(5) to be set
00:14:47.455 Initializing NVMe Controllers 00:14:47.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.455 Controller IO queue size 128, less than required. 00:14:47.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:47.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:47.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:47.455 Initialization complete. Launching workers. 00:14:47.455 ======================================================== 00:14:47.455 Latency(us) 00:14:47.455 Device Information : IOPS MiB/s Average min max 00:14:47.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.52 0.09 911205.14 729.01 1013454.91 00:14:47.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.16 0.08 902192.35 567.16 1013828.27 00:14:47.455 ======================================================== 00:14:47.455 Total : 351.67 0.17 906921.21 567.16 1013828.27 00:14:47.455 00:14:47.455 [2024-07-21 03:24:32.524888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11af620 (9): Bad file descriptor 00:14:47.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:47.455 03:24:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.455 03:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:47.455 03:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2364813 00:14:47.455 03:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2364813 00:14:48.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2364813) - No such process 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2364813 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2364813 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2364813 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.020 03:24:33 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.020 [2024-07-21 03:24:33.047930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2365230 00:14:48.020 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:48.021 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:48.021 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:48.021 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.021 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.021 [2024-07-21 03:24:33.111956] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
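The second run repeats the pattern with a 3-second perf job and this time simply waits for it to finish. The '(( delay++ > 20 ))' / 'kill -0' / 'sleep 0.5' lines traced below are iterations of the script's polling loop; reconstructed from the traced line numbers it has roughly this shape (the failure branch is never taken in this excerpt, so its exact body is an assumption):

    delay=0
    # delete_subsystem.sh lines 56-60: poll the backgrounded spdk_nvme_perf
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        # give up if perf is somehow still alive after ~10 s (20 polls)
        (( delay++ > 20 )) && exit 1
    done

Once perf exits, kill -0 fails with 'No such process' (visible further down at line 57) and the loop terminates.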
00:14:48.288 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.288 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:48.288 03:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.931 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.931 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:48.931 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.495 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:49.495 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:49.495 03:24:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.059 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.059 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:50.059 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.316 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.316 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:50.316 03:24:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.879 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.879 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:50.880 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.136 Initializing NVMe Controllers 00:14:51.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.136 Controller IO queue size 128, less than required. 00:14:51.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:51.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:51.136 Initialization complete. Launching workers. 
00:14:51.136 ======================================================== 00:14:51.136 Latency(us) 00:14:51.136 Device Information : IOPS MiB/s Average min max 00:14:51.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003373.14 1000175.38 1011855.81 00:14:51.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004918.38 1000217.66 1042857.04 00:14:51.136 ======================================================== 00:14:51.136 Total : 256.00 0.12 1004145.76 1000175.38 1042857.04 00:14:51.136 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2365230 00:14:51.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2365230) - No such process 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2365230 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.394 rmmod nvme_tcp 00:14:51.394 rmmod nvme_fabrics 00:14:51.394 rmmod nvme_keyring 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2364791 ']' 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2364791 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2364791 ']' 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2364791 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2364791 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2364791' 00:14:51.394 killing process with pid 2364791 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2364791 00:14:51.394 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
2364791 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.652 03:24:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.185 03:24:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:54.185 00:14:54.185 real 0m12.245s 00:14:54.185 user 0m27.872s 00:14:54.185 sys 0m2.892s 00:14:54.185 03:24:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:54.185 03:24:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.185 ************************************ 00:14:54.185 END TEST nvmf_delete_subsystem 00:14:54.185 ************************************ 00:14:54.185 03:24:38 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:54.185 03:24:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:54.185 03:24:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:54.185 03:24:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:54.185 ************************************ 00:14:54.185 START TEST nvmf_ns_masking 00:14:54.185 ************************************ 00:14:54.185 03:24:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:54.185 * Looking for test storage... 
00:14:54.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.185 03:24:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:54.185 [paths/export.sh@2-@6 elided: five near-identical traces that repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, then export and echo the result]
00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=454c2199-83b5-4134-92ee-9185725e146d 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:54.186 03:24:39
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:54.186 03:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:56.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:56.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:56.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:56.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.084 03:24:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:56.084 00:14:56.084 --- 10.0.0.2 ping statistics --- 00:14:56.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.084 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:14:56.084 00:14:56.084 --- 10.0.0.1 ping statistics --- 00:14:56.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.084 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2367668 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2367668 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2367668 ']' 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.084 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.085 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.085 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.085 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:56.085 [2024-07-21 03:24:41.127938] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
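nvmf_tcp_init, traced above, splits the two ports of one physical NIC into a target side and an initiator side on a single host: cvl_0_0 moves into a fresh network namespace and gets 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. A condensed sketch of the same topology, assuming the cvl_0_0/cvl_0_1 interface names from this run:

# The target interface lives in its own netns so initiator and target
# can talk over real TCP on one machine instead of a loopback shortcut.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator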
00:14:56.085 [2024-07-21 03:24:41.128018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.085 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.085 [2024-07-21 03:24:41.195401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.085 [2024-07-21 03:24:41.290753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.085 [2024-07-21 03:24:41.290806] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.085 [2024-07-21 03:24:41.290827] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.085 [2024-07-21 03:24:41.290841] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.085 [2024-07-21 03:24:41.290852] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.085 [2024-07-21 03:24:41.290921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.085 [2024-07-21 03:24:41.290975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.085 [2024-07-21 03:24:41.292633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.085 [2024-07-21 03:24:41.292644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.342 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.342 [2024-07-21 03:24:41.642862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.600 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:56.600 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:56.600 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:56.857 Malloc1 00:14:56.857 03:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:57.115 Malloc2 00:14:57.115 03:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:57.373 03:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:57.373 03:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.631 [2024-07-21 03:24:42.898535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.631 03:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:57.631 03:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 454c2199-83b5-4134-92ee-9185725e146d -a 10.0.0.2 -s 4420 -i 4 00:14:57.891 03:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.891 03:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:57.891 03:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.891 03:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:57.891 03:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.782 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.782 [ 0]:0x1 00:15:00.044 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.044 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.044 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb0b63cd31ec4bba89e3711bfdf58786 00:15:00.044 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb0b63cd31ec4bba89e3711bfdf58786 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.044 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
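Every assertion in this test goes through the ns_is_visible helper: a namespace counts as visible when `nvme list-ns` reports its NSID and `nvme id-ns` returns a non-zero NGUID, while a masked namespace either drops out of the list or identifies with an all-zero NGUID. A standalone sketch of that check, assuming the /dev/nvme0 controller name and the jq-based NGUID extraction used above:

# Succeed if namespace $2 is visible on controller $1; mirrors
# target/ns_masking.sh: the NSID must appear in list-ns and the
# reported NGUID must not be all zeros.
ns_is_visible() {
    local ctrl=$1 nsid=$2 nguid
    nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible /dev/nvme0 0x1 && echo "nsid 1 visible"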
00:15:00.301 [ 0]:0x1 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb0b63cd31ec4bba89e3711bfdf58786 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb0b63cd31ec4bba89e3711bfdf58786 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.301 [ 1]:0x2 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:00.301 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.558 03:24:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.815 03:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:01.072 03:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:01.072 03:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 454c2199-83b5-4134-92ee-9185725e146d -a 10.0.0.2 -s 4420 -i 4 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:01.330 03:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:03.225 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:03.483 [ 0]:0x2 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.483 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:03.741 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:03.741 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.741 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:03.741 [ 0]:0x1 00:15:03.741 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:03.741 03:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb0b63cd31ec4bba89e3711bfdf58786 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb0b63cd31ec4bba89e3711bfdf58786 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:03.741 [ 1]:0x2 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:03.741 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.998 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:03.998 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.998 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.256 
03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.256 [ 0]:0x2 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.256 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 454c2199-83b5-4134-92ee-9185725e146d -a 10.0.0.2 -s 4420 -i 4 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:04.513 03:24:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.047 [ 0]:0x1 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb0b63cd31ec4bba89e3711bfdf58786 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb0b63cd31ec4bba89e3711bfdf58786 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:07.047 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:07.048 [ 1]:0x2 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.048 03:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:07.048 [ 0]:0x2 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:07.048 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:07.305 [2024-07-21 03:24:52.609461] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:07.305 request: 00:15:07.305 { 00:15:07.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.305 "nsid": 2, 00:15:07.305 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.305 "method": 
"nvmf_ns_remove_host", 00:15:07.305 "req_id": 1 00:15:07.305 } 00:15:07.305 Got JSON-RPC error response 00:15:07.305 response: 00:15:07.305 { 00:15:07.305 "code": -32602, 00:15:07.305 "message": "Invalid parameters" 00:15:07.305 } 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:07.593 [ 0]:0x2 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5febbb5171b24ed99a5ec89ed58d35fd 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5febbb5171b24ed99a5ec89ed58d35fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.593 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.851 03:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.851 rmmod nvme_tcp 00:15:07.851 rmmod nvme_fabrics 00:15:07.851 rmmod nvme_keyring 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2367668 ']' 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2367668 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2367668 ']' 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2367668 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2367668 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2367668' 00:15:07.851 killing process with pid 2367668 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2367668 00:15:07.851 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2367668 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.109 03:24:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.637 
03:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.637 00:15:10.637 real 0m16.419s 00:15:10.637 user 0m51.343s 00:15:10.637 sys 0m3.703s 00:15:10.637 03:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.637 03:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:10.637 ************************************ 00:15:10.637 END TEST nvmf_ns_masking 00:15:10.637 ************************************ 00:15:10.637 03:24:55 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:10.637 03:24:55 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:10.637 03:24:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:10.637 03:24:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.637 03:24:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.637 ************************************ 00:15:10.637 START TEST nvmf_nvme_cli 00:15:10.637 ************************************ 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:10.637 * Looking for test storage... 00:15:10.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.637 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.638 03:24:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.538 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:12.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:12.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.539 03:24:57 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:12.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:12.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:15:12.539 00:15:12.539 --- 10.0.0.2 ping statistics --- 00:15:12.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.539 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:15:12.539 00:15:12.539 --- 10.0.0.1 ping statistics --- 00:15:12.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.539 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2371113 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2371113 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2371113 ']' 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
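The nvmf_tcp_init sequence traced above is what lets a single test host exercise a real two-port NVMe/TCP path: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), while its peer port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1). The two pings verify reachability in both directions before any NVMe traffic flows, and the target application itself is then launched under "ip netns exec cvl_0_0_ns_spdk" so it only sees the namespaced port. A minimal standalone sketch of that plumbing, assuming the interface names and 10.0.0.0/24 addressing from this run:

#!/usr/bin/env bash
# Sketch only -- mirrors the commands traced above; interface names are host-specific.
TARGET_IF=cvl_0_0        # moved into the namespace, owned by the SPDK target (10.0.0.2)
INITIATOR_IF=cvl_0_1     # stays in the root namespace, used by nvme-cli (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator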
00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.539 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.539 [2024-07-21 03:24:57.696067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:12.539 [2024-07-21 03:24:57.696145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.539 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.539 [2024-07-21 03:24:57.764934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.797 [2024-07-21 03:24:57.857049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.797 [2024-07-21 03:24:57.857102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.797 [2024-07-21 03:24:57.857116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.797 [2024-07-21 03:24:57.857127] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.797 [2024-07-21 03:24:57.857136] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.797 [2024-07-21 03:24:57.857217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.798 [2024-07-21 03:24:57.857240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.798 [2024-07-21 03:24:57.857296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.798 [2024-07-21 03:24:57.857298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.798 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.798 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:12.798 03:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.798 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.798 03:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 [2024-07-21 03:24:58.025479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 Malloc0 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 Malloc1 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.798 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.798 [2024-07-21 03:24:58.109558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:13.056 00:15:13.056 Discovery Log Number of Records 2, Generation counter 2 00:15:13.056 =====Discovery Log Entry 0====== 00:15:13.056 trtype: tcp 00:15:13.056 adrfam: ipv4 00:15:13.056 subtype: current discovery subsystem 00:15:13.056 treq: not required 00:15:13.056 portid: 0 00:15:13.056 trsvcid: 4420 00:15:13.056 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:13.056 traddr: 10.0.0.2 00:15:13.056 eflags: explicit discovery connections, duplicate discovery information 00:15:13.056 sectype: none 00:15:13.056 =====Discovery Log Entry 1====== 00:15:13.056 trtype: tcp 00:15:13.056 adrfam: ipv4 00:15:13.056 subtype: nvme subsystem 00:15:13.056 treq: not required 00:15:13.056 portid: 0 00:15:13.056 trsvcid: 
4420 00:15:13.056 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:13.056 traddr: 10.0.0.2 00:15:13.056 eflags: none 00:15:13.056 sectype: none 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:13.056 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:13.622 03:24:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:16.152 03:25:00 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:16.152 /dev/nvme0n1 ]] 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:16.152 03:25:01 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.152 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.152 rmmod nvme_tcp 00:15:16.409 rmmod nvme_fabrics 00:15:16.409 rmmod nvme_keyring 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2371113 ']' 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2371113 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2371113 ']' 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2371113 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2371113 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2371113' 00:15:16.409 killing process with pid 2371113 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2371113 00:15:16.409 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2371113 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.668 03:25:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.569 03:25:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.569 00:15:18.569 real 0m8.374s 00:15:18.569 user 0m16.082s 00:15:18.569 sys 0m2.193s 00:15:18.569 03:25:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:18.569 03:25:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:18.569 ************************************ 00:15:18.569 END TEST nvmf_nvme_cli 00:15:18.569 ************************************ 00:15:18.569 03:25:03 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:18.569 03:25:03 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.569 03:25:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:18.569 03:25:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.569 03:25:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.569 ************************************ 00:15:18.569 START TEST nvmf_vfio_user 00:15:18.569 ************************************ 00:15:18.569 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.826 * Looking for test storage... 00:15:18.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.826 03:25:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:18.827 
03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2372027 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2372027' 00:15:18.827 Process pid: 2372027 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2372027 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2372027 ']' 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.827 03:25:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 [2024-07-21 03:25:03.997335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:18.827 [2024-07-21 03:25:03.997411] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.827 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.827 [2024-07-21 03:25:04.057523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.084 [2024-07-21 03:25:04.143516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.084 [2024-07-21 03:25:04.143560] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.084 [2024-07-21 03:25:04.143581] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.084 [2024-07-21 03:25:04.143592] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.084 [2024-07-21 03:25:04.143602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
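With the target up, the vfio-user test provisions its emulated controllers entirely over JSON-RPC. Unlike the TCP case, no IP plumbing is needed: for the VFIOUSER transport the listener address is a directory that will hold the controller's vfio-user socket (cntrl), which spdk_nvme_identify later maps as if it were a local PCIe function. A condensed sketch of the per-controller RPC sequence traced below, using the rpc.py path from this workspace (the second controller repeats it with Malloc2/cnode2 under vfio-user2/2):

#!/usr/bin/env bash
# Sketch only -- condenses the RPC calls traced below; paths and NQNs match this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER           # register the vfio-user transport
mkdir -p /var/run/vfio-user/domain/vfio-user1/1  # socket directory doubles as listener address
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

spdk_nvme_identify then reaches the controller with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', as the trace below shows.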
00:15:19.084 [2024-07-21 03:25:04.143662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.084 [2024-07-21 03:25:04.143728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.084 [2024-07-21 03:25:04.143787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.084 [2024-07-21 03:25:04.143790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.084 03:25:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.084 03:25:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:19.084 03:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:20.015 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:20.273 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:20.273 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:20.273 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.273 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:20.273 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:20.838 Malloc1 00:15:20.838 03:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:20.838 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:21.095 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:21.352 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.352 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:21.352 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:21.610 Malloc2 00:15:21.610 03:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.866 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:22.124 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:22.381 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:22.381 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:22.381 03:25:07 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.381 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:22.381 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:22.381 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.381 [2024-07-21 03:25:07.665125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:22.381 [2024-07-21 03:25:07.665163] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372458 ] 00:15:22.381 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.640 [2024-07-21 03:25:07.698926] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:22.640 [2024-07-21 03:25:07.708092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.640 [2024-07-21 03:25:07.708119] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1620f8b000 00:15:22.640 [2024-07-21 03:25:07.709089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.712637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.713099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.714104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.715107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.716111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.717115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.718120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.640 [2024-07-21 03:25:07.719125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.640 [2024-07-21 03:25:07.719145] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f161fd3d000 00:15:22.640 [2024-07-21 03:25:07.720260] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.640 [2024-07-21 03:25:07.735944] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:22.640 [2024-07-21 03:25:07.735976] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:22.640 [2024-07-21 03:25:07.738230] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.640 [2024-07-21 03:25:07.738280] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.640 [2024-07-21 03:25:07.738366] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:22.640 [2024-07-21 03:25:07.738395] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:22.640 [2024-07-21 03:25:07.738405] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:22.641 [2024-07-21 03:25:07.739232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:22.641 [2024-07-21 03:25:07.739256] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:22.641 [2024-07-21 03:25:07.739269] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:22.641 [2024-07-21 03:25:07.740239] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.641 [2024-07-21 03:25:07.740258] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:22.641 [2024-07-21 03:25:07.740272] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.741243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:22.641 [2024-07-21 03:25:07.741261] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.742250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:22.641 [2024-07-21 03:25:07.742268] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:22.641 [2024-07-21 03:25:07.742277] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.742288] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.742398] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:22.641 [2024-07-21 03:25:07.742406] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.742414] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:22.641 [2024-07-21 03:25:07.746637] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:22.641 [2024-07-21 03:25:07.747280] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:22.641 [2024-07-21 03:25:07.748287] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.641 [2024-07-21 03:25:07.749281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.641 [2024-07-21 03:25:07.749369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.641 [2024-07-21 03:25:07.750298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:22.641 [2024-07-21 03:25:07.750314] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.641 [2024-07-21 03:25:07.750324] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750347] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:22.641 [2024-07-21 03:25:07.750367] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750395] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.641 [2024-07-21 03:25:07.750404] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.641 [2024-07-21 03:25:07.750424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.750471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.750491] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:22.641 [2024-07-21 03:25:07.750499] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:22.641 [2024-07-21 03:25:07.750507] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:22.641 [2024-07-21 03:25:07.750514] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.641 [2024-07-21 03:25:07.750521] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:22.641 [2024-07-21 03:25:07.750529] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:22.641 [2024-07-21 03:25:07.750537] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750548] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.750575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.750591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.641 [2024-07-21 03:25:07.750604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.641 [2024-07-21 03:25:07.750636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.641 [2024-07-21 03:25:07.750650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.641 [2024-07-21 03:25:07.750659] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.750703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.750714] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:22.641 [2024-07-21 03:25:07.750722] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750738] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750753] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.750781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.750847] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750862] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750875] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.641 [2024-07-21 03:25:07.750900] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.641 [2024-07-21 03:25:07.750910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.750928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.750959] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:22.641 [2024-07-21 03:25:07.750974] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.750988] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.751000] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.641 [2024-07-21 03:25:07.751009] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.641 [2024-07-21 03:25:07.751018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.641 [2024-07-21 03:25:07.751039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.641 [2024-07-21 03:25:07.751060] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.751074] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.641 [2024-07-21 03:25:07.751086] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.642 [2024-07-21 03:25:07.751095] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.642 [2024-07-21 03:25:07.751105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751134] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.642 [2024-07-21 03:25:07.751146] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:22.642 [2024-07-21 03:25:07.751159] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:22.642 [2024-07-21 03:25:07.751172] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.642 [2024-07-21 03:25:07.751181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:22.642 [2024-07-21 03:25:07.751189] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.642 [2024-07-21 03:25:07.751212] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:22.642 [2024-07-21 03:25:07.751220] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:22.642 [2024-07-21 03:25:07.751250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.642 [2024-07-21 03:25:07.751379] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.642 [2024-07-21 03:25:07.751385] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.642 [2024-07-21 03:25:07.751392] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.642 [2024-07-21 03:25:07.751401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.642 [2024-07-21 03:25:07.751412] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.642 [2024-07-21 03:25:07.751420] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.642 [2024-07-21 03:25:07.751429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.642 [2024-07-21 03:25:07.751448] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.642 [2024-07-21 03:25:07.751456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751468] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.642 [2024-07-21 03:25:07.751476] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.642 [2024-07-21 03:25:07.751485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.642 [2024-07-21 03:25:07.751500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.642 [2024-07-21 03:25:07.751549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.642 ===================================================== 00:15:22.642 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.642 ===================================================== 00:15:22.642 Controller Capabilities/Features 00:15:22.642 ================================ 00:15:22.642 Vendor ID: 4e58 00:15:22.642 Subsystem Vendor ID: 4e58 00:15:22.642 Serial Number: SPDK1 00:15:22.642 Model Number: SPDK bdev Controller 00:15:22.642 Firmware Version: 24.05.1 00:15:22.642 Recommended Arb Burst: 6 00:15:22.642 IEEE OUI Identifier: 8d 6b 50 00:15:22.642 Multi-path I/O 00:15:22.642 May have multiple subsystem ports: Yes 00:15:22.642 May have multiple controllers: Yes 00:15:22.642 Associated with SR-IOV VF: No 00:15:22.642 Max Data Transfer Size: 131072 00:15:22.642 Max Number of Namespaces: 32 00:15:22.642 Max Number of I/O Queues: 127 00:15:22.642 NVMe Specification Version (VS): 1.3 00:15:22.642 NVMe Specification Version (Identify): 1.3 00:15:22.642 Maximum Queue Entries: 256 00:15:22.642 Contiguous Queues Required: Yes 00:15:22.642 Arbitration Mechanisms Supported 00:15:22.642 Weighted Round Robin: Not Supported 00:15:22.642 Vendor Specific: Not Supported 00:15:22.642 Reset Timeout: 15000 ms 00:15:22.642 Doorbell Stride: 4 bytes 00:15:22.642 NVM Subsystem Reset: Not Supported 00:15:22.642 Command Sets Supported 00:15:22.642 NVM Command Set: Supported 00:15:22.642 Boot Partition: Not Supported 00:15:22.642 Memory Page Size Minimum: 4096 bytes 00:15:22.642 Memory Page Size Maximum: 4096 bytes 00:15:22.642 Persistent Memory Region: Not Supported 00:15:22.642 Optional Asynchronous Events Supported 00:15:22.642 Namespace Attribute Notices: Supported 00:15:22.642 Firmware Activation Notices: Not Supported 00:15:22.642 ANA Change Notices: Not Supported 
00:15:22.642 PLE Aggregate Log Change Notices: Not Supported 00:15:22.642 LBA Status Info Alert Notices: Not Supported 00:15:22.642 EGE Aggregate Log Change Notices: Not Supported 00:15:22.642 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.642 Zone Descriptor Change Notices: Not Supported 00:15:22.642 Discovery Log Change Notices: Not Supported 00:15:22.642 Controller Attributes 00:15:22.642 128-bit Host Identifier: Supported 00:15:22.642 Non-Operational Permissive Mode: Not Supported 00:15:22.642 NVM Sets: Not Supported 00:15:22.642 Read Recovery Levels: Not Supported 00:15:22.642 Endurance Groups: Not Supported 00:15:22.642 Predictable Latency Mode: Not Supported 00:15:22.642 Traffic Based Keep ALive: Not Supported 00:15:22.642 Namespace Granularity: Not Supported 00:15:22.642 SQ Associations: Not Supported 00:15:22.642 UUID List: Not Supported 00:15:22.642 Multi-Domain Subsystem: Not Supported 00:15:22.642 Fixed Capacity Management: Not Supported 00:15:22.642 Variable Capacity Management: Not Supported 00:15:22.642 Delete Endurance Group: Not Supported 00:15:22.642 Delete NVM Set: Not Supported 00:15:22.642 Extended LBA Formats Supported: Not Supported 00:15:22.642 Flexible Data Placement Supported: Not Supported 00:15:22.642 00:15:22.642 Controller Memory Buffer Support 00:15:22.642 ================================ 00:15:22.642 Supported: No 00:15:22.642 00:15:22.642 Persistent Memory Region Support 00:15:22.643 ================================ 00:15:22.643 Supported: No 00:15:22.643 00:15:22.643 Admin Command Set Attributes 00:15:22.643 ============================ 00:15:22.643 Security Send/Receive: Not Supported 00:15:22.643 Format NVM: Not Supported 00:15:22.643 Firmware Activate/Download: Not Supported 00:15:22.643 Namespace Management: Not Supported 00:15:22.643 Device Self-Test: Not Supported 00:15:22.643 Directives: Not Supported 00:15:22.643 NVMe-MI: Not Supported 00:15:22.643 Virtualization Management: Not Supported 00:15:22.643 Doorbell Buffer Config: Not Supported 00:15:22.643 Get LBA Status Capability: Not Supported 00:15:22.643 Command & Feature Lockdown Capability: Not Supported 00:15:22.643 Abort Command Limit: 4 00:15:22.643 Async Event Request Limit: 4 00:15:22.643 Number of Firmware Slots: N/A 00:15:22.643 Firmware Slot 1 Read-Only: N/A 00:15:22.643 Firmware Activation Without Reset: N/A 00:15:22.643 Multiple Update Detection Support: N/A 00:15:22.643 Firmware Update Granularity: No Information Provided 00:15:22.643 Per-Namespace SMART Log: No 00:15:22.643 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.643 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:22.643 Command Effects Log Page: Supported 00:15:22.643 Get Log Page Extended Data: Supported 00:15:22.643 Telemetry Log Pages: Not Supported 00:15:22.643 Persistent Event Log Pages: Not Supported 00:15:22.643 Supported Log Pages Log Page: May Support 00:15:22.643 Commands Supported & Effects Log Page: Not Supported 00:15:22.643 Feature Identifiers & Effects Log Page:May Support 00:15:22.643 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.643 Data Area 4 for Telemetry Log: Not Supported 00:15:22.643 Error Log Page Entries Supported: 128 00:15:22.643 Keep Alive: Supported 00:15:22.643 Keep Alive Granularity: 10000 ms 00:15:22.643 00:15:22.643 NVM Command Set Attributes 00:15:22.643 ========================== 00:15:22.643 Submission Queue Entry Size 00:15:22.643 Max: 64 00:15:22.643 Min: 64 00:15:22.643 Completion Queue Entry Size 00:15:22.643 Max: 16 00:15:22.643 Min: 16 
00:15:22.643 Number of Namespaces: 32 00:15:22.643 Compare Command: Supported 00:15:22.643 Write Uncorrectable Command: Not Supported 00:15:22.643 Dataset Management Command: Supported 00:15:22.643 Write Zeroes Command: Supported 00:15:22.643 Set Features Save Field: Not Supported 00:15:22.643 Reservations: Not Supported 00:15:22.643 Timestamp: Not Supported 00:15:22.643 Copy: Supported 00:15:22.643 Volatile Write Cache: Present 00:15:22.643 Atomic Write Unit (Normal): 1 00:15:22.643 Atomic Write Unit (PFail): 1 00:15:22.643 Atomic Compare & Write Unit: 1 00:15:22.643 Fused Compare & Write: Supported 00:15:22.643 Scatter-Gather List 00:15:22.643 SGL Command Set: Supported (Dword aligned) 00:15:22.643 SGL Keyed: Not Supported 00:15:22.643 SGL Bit Bucket Descriptor: Not Supported 00:15:22.643 SGL Metadata Pointer: Not Supported 00:15:22.643 Oversized SGL: Not Supported 00:15:22.643 SGL Metadata Address: Not Supported 00:15:22.643 SGL Offset: Not Supported 00:15:22.643 Transport SGL Data Block: Not Supported 00:15:22.643 Replay Protected Memory Block: Not Supported 00:15:22.643 00:15:22.643 Firmware Slot Information 00:15:22.643 ========================= 00:15:22.643 Active slot: 1 00:15:22.643 Slot 1 Firmware Revision: 24.05.1 00:15:22.643 00:15:22.643 00:15:22.643 Commands Supported and Effects 00:15:22.643 ============================== 00:15:22.643 Admin Commands 00:15:22.643 -------------- 00:15:22.643 Get Log Page (02h): Supported 00:15:22.643 Identify (06h): Supported 00:15:22.643 Abort (08h): Supported 00:15:22.643 Set Features (09h): Supported 00:15:22.643 Get Features (0Ah): Supported 00:15:22.643 Asynchronous Event Request (0Ch): Supported 00:15:22.643 Keep Alive (18h): Supported 00:15:22.643 I/O Commands 00:15:22.643 ------------ 00:15:22.643 Flush (00h): Supported LBA-Change 00:15:22.643 Write (01h): Supported LBA-Change 00:15:22.643 Read (02h): Supported 00:15:22.643 Compare (05h): Supported 00:15:22.643 Write Zeroes (08h): Supported LBA-Change 00:15:22.643 Dataset Management (09h): Supported LBA-Change 00:15:22.643 Copy (19h): Supported LBA-Change 00:15:22.643 Unknown (79h): Supported LBA-Change 00:15:22.643 Unknown (7Ah): Supported 00:15:22.643 00:15:22.643 Error Log 00:15:22.643 ========= 00:15:22.643 00:15:22.643 Arbitration 00:15:22.643 =========== 00:15:22.643 Arbitration Burst: 1 00:15:22.643 00:15:22.643 Power Management 00:15:22.643 ================ 00:15:22.643 Number of Power States: 1 00:15:22.643 Current Power State: Power State #0 00:15:22.643 Power State #0: 00:15:22.643 Max Power: 0.00 W 00:15:22.643 Non-Operational State: Operational 00:15:22.643 Entry Latency: Not Reported 00:15:22.643 Exit Latency: Not Reported 00:15:22.643 Relative Read Throughput: 0 00:15:22.643 Relative Read Latency: 0 00:15:22.643 Relative Write Throughput: 0 00:15:22.643 Relative Write Latency: 0 00:15:22.643 Idle Power: Not Reported 00:15:22.643 Active Power: Not Reported 00:15:22.643 Non-Operational Permissive Mode: Not Supported 00:15:22.643 00:15:22.643 Health Information 00:15:22.643 ================== 00:15:22.643 Critical Warnings: 00:15:22.643 Available Spare Space: OK 00:15:22.643 Temperature: OK 00:15:22.643 Device Reliability: OK 00:15:22.643 Read Only: No 00:15:22.643 Volatile Memory Backup: OK 00:15:22.643 Current Temperature: 0 Kelvin[2024-07-21 03:25:07.751708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.643 [2024-07-21 03:25:07.751725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.643 [2024-07-21 03:25:07.751764] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:22.643 [2024-07-21 03:25:07.751782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.643 [2024-07-21 03:25:07.751793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.643 [2024-07-21 03:25:07.751802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.643 [2024-07-21 03:25:07.751812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.643 [2024-07-21 03:25:07.752309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.643 [2024-07-21 03:25:07.752329] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:22.643 [2024-07-21 03:25:07.753310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.643 [2024-07-21 03:25:07.753377] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:22.643 [2024-07-21 03:25:07.753391] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:22.643 [2024-07-21 03:25:07.754320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:22.643 [2024-07-21 03:25:07.754342] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:22.643 [2024-07-21 03:25:07.754394] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:22.644 [2024-07-21 03:25:07.756359] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.644 (-273 Celsius) 00:15:22.644 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.644 Available Spare: 0% 00:15:22.644 Available Spare Threshold: 0% 00:15:22.644 Life Percentage Used: 0% 00:15:22.644 Data Units Read: 0 00:15:22.644 Data Units Written: 0 00:15:22.644 Host Read Commands: 0 00:15:22.644 Host Write Commands: 0 00:15:22.644 Controller Busy Time: 0 minutes 00:15:22.644 Power Cycles: 0 00:15:22.644 Power On Hours: 0 hours 00:15:22.644 Unsafe Shutdowns: 0 00:15:22.644 Unrecoverable Media Errors: 0 00:15:22.644 Lifetime Error Log Entries: 0 00:15:22.644 Warning Temperature Time: 0 minutes 00:15:22.644 Critical Temperature Time: 0 minutes 00:15:22.644 00:15:22.644 Number of Queues 00:15:22.644 ================ 00:15:22.644 Number of I/O Submission Queues: 127 00:15:22.644 Number of I/O Completion Queues: 127 00:15:22.644 00:15:22.644 Active Namespaces 00:15:22.644 ================= 00:15:22.644 Namespace ID:1 00:15:22.644 Error Recovery Timeout: Unlimited 00:15:22.644 Command Set Identifier: NVM (00h) 00:15:22.644 Deallocate: Supported 00:15:22.644 Deallocated/Unwritten Error: Not Supported 
00:15:22.644 Deallocated Read Value: Unknown 00:15:22.644 Deallocate in Write Zeroes: Not Supported 00:15:22.644 Deallocated Guard Field: 0xFFFF 00:15:22.644 Flush: Supported 00:15:22.644 Reservation: Supported 00:15:22.644 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.644 Size (in LBAs): 131072 (0GiB) 00:15:22.644 Capacity (in LBAs): 131072 (0GiB) 00:15:22.644 Utilization (in LBAs): 131072 (0GiB) 00:15:22.644 NGUID: AC270079305A444EA9EB009AEE2BA716 00:15:22.644 UUID: ac270079-305a-444e-a9eb-009aee2ba716 00:15:22.644 Thin Provisioning: Not Supported 00:15:22.644 Per-NS Atomic Units: Yes 00:15:22.644 Atomic Boundary Size (Normal): 0 00:15:22.644 Atomic Boundary Size (PFail): 0 00:15:22.644 Atomic Boundary Offset: 0 00:15:22.644 Maximum Single Source Range Length: 65535 00:15:22.644 Maximum Copy Length: 65535 00:15:22.644 Maximum Source Range Count: 1 00:15:22.644 NGUID/EUI64 Never Reused: No 00:15:22.644 Namespace Write Protected: No 00:15:22.644 Number of LBA Formats: 1 00:15:22.644 Current LBA Format: LBA Format #00 00:15:22.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.644 00:15:22.644 03:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.644 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.902 [2024-07-21 03:25:07.988443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.165 Initializing NVMe Controllers 00:15:28.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:28.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:28.165 Initialization complete. Launching workers. 00:15:28.165 ======================================================== 00:15:28.165 Latency(us) 00:15:28.165 Device Information : IOPS MiB/s Average min max 00:15:28.165 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35911.60 140.28 3565.39 1155.58 9532.01 00:15:28.165 ======================================================== 00:15:28.165 Total : 35911.60 140.28 3565.39 1155.58 9532.01 00:15:28.165 00:15:28.165 [2024-07-21 03:25:13.014626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.165 03:25:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:28.165 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.165 [2024-07-21 03:25:13.258714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.458 Initializing NVMe Controllers 00:15:33.458 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.458 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:33.458 Initialization complete. Launching workers. 
00:15:33.458 ======================================================== 00:15:33.458 Latency(us) 00:15:33.458 Device Information : IOPS MiB/s Average min max 00:15:33.458 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.00 62.70 7983.49 5969.40 15110.01 00:15:33.458 ======================================================== 00:15:33.458 Total : 16050.00 62.70 7983.49 5969.40 15110.01 00:15:33.458 00:15:33.458 [2024-07-21 03:25:18.293742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.458 03:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:33.458 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.458 [2024-07-21 03:25:18.505842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:38.712 [2024-07-21 03:25:23.596003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.712 Initializing NVMe Controllers 00:15:38.712 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.712 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.712 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:38.712 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:38.712 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:38.712 Initialization complete. Launching workers. 00:15:38.712 Starting thread on core 2 00:15:38.712 Starting thread on core 1 00:15:38.712 Starting thread on core 3 00:15:38.712 03:25:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:38.712 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.712 [2024-07-21 03:25:23.883124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.991 [2024-07-21 03:25:26.941210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.991 Initializing NVMe Controllers 00:15:41.991 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.991 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.991 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:41.991 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:41.991 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:41.991 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:41.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.991 Initialization complete. Launching workers. 
00:15:41.991 Starting thread on core 1 with urgent priority queue 00:15:41.991 Starting thread on core 2 with urgent priority queue 00:15:41.991 Starting thread on core 3 with urgent priority queue 00:15:41.991 Starting thread on core 0 with urgent priority queue 00:15:41.991 SPDK bdev Controller (SPDK1 ) core 0: 6072.67 IO/s 16.47 secs/100000 ios 00:15:41.991 SPDK bdev Controller (SPDK1 ) core 1: 6181.33 IO/s 16.18 secs/100000 ios 00:15:41.991 SPDK bdev Controller (SPDK1 ) core 2: 5722.33 IO/s 17.48 secs/100000 ios 00:15:41.991 SPDK bdev Controller (SPDK1 ) core 3: 5977.33 IO/s 16.73 secs/100000 ios 00:15:41.991 ======================================================== 00:15:41.991 00:15:41.991 03:25:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.991 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.991 [2024-07-21 03:25:27.235136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.991 Initializing NVMe Controllers 00:15:41.991 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.991 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.991 Namespace ID: 1 size: 0GB 00:15:41.991 Initialization complete. 00:15:41.991 INFO: using host memory buffer for IO 00:15:41.991 Hello world! 00:15:41.991 [2024-07-21 03:25:27.268759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.248 03:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:42.248 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.505 [2024-07-21 03:25:27.569111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.438 Initializing NVMe Controllers 00:15:43.438 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.438 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.438 Initialization complete. Launching workers. 
00:15:43.438 submit (in ns) avg, min, max = 7790.4, 3477.8, 4003978.9 00:15:43.438 complete (in ns) avg, min, max = 26418.4, 2056.7, 5994232.2 00:15:43.438 00:15:43.438 Submit histogram 00:15:43.438 ================ 00:15:43.438 Range in us Cumulative Count 00:15:43.438 3.461 - 3.484: 0.0076% ( 1) 00:15:43.438 3.484 - 3.508: 0.0227% ( 2) 00:15:43.438 3.508 - 3.532: 0.4915% ( 62) 00:15:43.438 3.532 - 3.556: 1.7769% ( 170) 00:15:43.438 3.556 - 3.579: 4.7486% ( 393) 00:15:43.438 3.579 - 3.603: 10.0718% ( 704) 00:15:43.438 3.603 - 3.627: 18.8280% ( 1158) 00:15:43.438 3.627 - 3.650: 27.2212% ( 1110) 00:15:43.438 3.650 - 3.674: 35.9168% ( 1150) 00:15:43.438 3.674 - 3.698: 42.9036% ( 924) 00:15:43.438 3.698 - 3.721: 49.8828% ( 923) 00:15:43.438 3.721 - 3.745: 54.6616% ( 632) 00:15:43.438 3.745 - 3.769: 58.7448% ( 540) 00:15:43.438 3.769 - 3.793: 62.3894% ( 482) 00:15:43.438 3.793 - 3.816: 65.7467% ( 444) 00:15:43.438 3.816 - 3.840: 69.2703% ( 466) 00:15:43.438 3.840 - 3.864: 73.5425% ( 565) 00:15:43.438 3.864 - 3.887: 77.3762% ( 507) 00:15:43.438 3.887 - 3.911: 81.0208% ( 482) 00:15:43.438 3.911 - 3.935: 84.2117% ( 422) 00:15:43.438 3.935 - 3.959: 86.4272% ( 293) 00:15:43.438 3.959 - 3.982: 88.0302% ( 212) 00:15:43.438 3.982 - 4.006: 89.7013% ( 221) 00:15:43.438 4.006 - 4.030: 90.8053% ( 146) 00:15:43.438 4.030 - 4.053: 91.7505% ( 125) 00:15:43.438 4.053 - 4.077: 92.7032% ( 126) 00:15:43.438 4.077 - 4.101: 93.5879% ( 117) 00:15:43.438 4.101 - 4.124: 94.3592% ( 102) 00:15:43.438 4.124 - 4.148: 94.9641% ( 80) 00:15:43.438 4.148 - 4.172: 95.4253% ( 61) 00:15:43.438 4.172 - 4.196: 95.8715% ( 59) 00:15:43.438 4.196 - 4.219: 96.1890% ( 42) 00:15:43.438 4.219 - 4.243: 96.4083% ( 29) 00:15:43.438 4.243 - 4.267: 96.5217% ( 15) 00:15:43.438 4.267 - 4.290: 96.6427% ( 16) 00:15:43.438 4.290 - 4.314: 96.7713% ( 17) 00:15:43.438 4.314 - 4.338: 96.8620% ( 12) 00:15:43.438 4.338 - 4.361: 96.9225% ( 8) 00:15:43.438 4.361 - 4.385: 97.0510% ( 17) 00:15:43.438 4.385 - 4.409: 97.0888% ( 5) 00:15:43.438 4.409 - 4.433: 97.1267% ( 5) 00:15:43.438 4.433 - 4.456: 97.1796% ( 7) 00:15:43.438 4.456 - 4.480: 97.2174% ( 5) 00:15:43.438 4.480 - 4.504: 97.2476% ( 4) 00:15:43.438 4.504 - 4.527: 97.2930% ( 6) 00:15:43.438 4.527 - 4.551: 97.3233% ( 4) 00:15:43.438 4.551 - 4.575: 97.3384% ( 2) 00:15:43.438 4.575 - 4.599: 97.3459% ( 1) 00:15:43.438 4.646 - 4.670: 97.3686% ( 3) 00:15:43.438 4.670 - 4.693: 97.3837% ( 2) 00:15:43.438 4.693 - 4.717: 97.3913% ( 1) 00:15:43.438 4.717 - 4.741: 97.4064% ( 2) 00:15:43.438 4.741 - 4.764: 97.4291% ( 3) 00:15:43.438 4.764 - 4.788: 97.4669% ( 5) 00:15:43.438 4.788 - 4.812: 97.4745% ( 1) 00:15:43.438 4.812 - 4.836: 97.5047% ( 4) 00:15:43.438 4.836 - 4.859: 97.5123% ( 1) 00:15:43.438 4.859 - 4.883: 97.5501% ( 5) 00:15:43.438 4.883 - 4.907: 97.5803% ( 4) 00:15:43.438 4.907 - 4.930: 97.6181% ( 5) 00:15:43.438 4.930 - 4.954: 97.6786% ( 8) 00:15:43.438 4.954 - 4.978: 97.7391% ( 8) 00:15:43.438 4.978 - 5.001: 97.7845% ( 6) 00:15:43.438 5.001 - 5.025: 97.8299% ( 6) 00:15:43.438 5.025 - 5.049: 97.8526% ( 3) 00:15:43.438 5.049 - 5.073: 97.8677% ( 2) 00:15:43.438 5.073 - 5.096: 97.8979% ( 4) 00:15:43.438 5.096 - 5.120: 97.9433% ( 6) 00:15:43.438 5.120 - 5.144: 97.9509% ( 1) 00:15:43.439 5.144 - 5.167: 97.9660% ( 2) 00:15:43.439 5.167 - 5.191: 97.9962% ( 4) 00:15:43.439 5.191 - 5.215: 98.0113% ( 2) 00:15:43.439 5.215 - 5.239: 98.0416% ( 4) 00:15:43.439 5.239 - 5.262: 98.0643% ( 3) 00:15:43.439 5.262 - 5.286: 98.0794% ( 2) 00:15:43.439 5.286 - 5.310: 98.0945% ( 2) 00:15:43.439 5.310 - 5.333: 98.1096% ( 2) 
00:15:43.439 5.357 - 5.381: 98.1323% ( 3) 00:15:43.439 5.404 - 5.428: 98.1399% ( 1) 00:15:43.439 5.428 - 5.452: 98.1550% ( 2) 00:15:43.439 5.452 - 5.476: 98.1701% ( 2) 00:15:43.439 5.476 - 5.499: 98.1777% ( 1) 00:15:43.439 5.523 - 5.547: 98.1853% ( 1) 00:15:43.439 5.547 - 5.570: 98.1928% ( 1) 00:15:43.439 5.570 - 5.594: 98.2004% ( 1) 00:15:43.439 5.618 - 5.641: 98.2079% ( 1) 00:15:43.439 5.879 - 5.902: 98.2155% ( 1) 00:15:43.439 5.997 - 6.021: 98.2231% ( 1) 00:15:43.439 6.044 - 6.068: 98.2306% ( 1) 00:15:43.439 6.068 - 6.116: 98.2457% ( 2) 00:15:43.439 6.116 - 6.163: 98.2760% ( 4) 00:15:43.439 6.210 - 6.258: 98.2836% ( 1) 00:15:43.439 6.305 - 6.353: 98.2911% ( 1) 00:15:43.439 6.495 - 6.542: 98.2987% ( 1) 00:15:43.439 6.590 - 6.637: 98.3062% ( 1) 00:15:43.439 6.637 - 6.684: 98.3138% ( 1) 00:15:43.439 6.779 - 6.827: 98.3214% ( 1) 00:15:43.439 6.827 - 6.874: 98.3289% ( 1) 00:15:43.439 6.874 - 6.921: 98.3516% ( 3) 00:15:43.439 7.064 - 7.111: 98.3592% ( 1) 00:15:43.439 7.111 - 7.159: 98.3743% ( 2) 00:15:43.439 7.253 - 7.301: 98.3970% ( 3) 00:15:43.439 7.348 - 7.396: 98.4197% ( 3) 00:15:43.439 7.396 - 7.443: 98.4348% ( 2) 00:15:43.439 7.538 - 7.585: 98.4423% ( 1) 00:15:43.439 7.585 - 7.633: 98.4575% ( 2) 00:15:43.439 7.633 - 7.680: 98.4650% ( 1) 00:15:43.439 7.680 - 7.727: 98.4953% ( 4) 00:15:43.439 7.727 - 7.775: 98.5028% ( 1) 00:15:43.439 7.775 - 7.822: 98.5104% ( 1) 00:15:43.439 8.059 - 8.107: 98.5180% ( 1) 00:15:43.439 8.107 - 8.154: 98.5331% ( 2) 00:15:43.439 8.201 - 8.249: 98.5482% ( 2) 00:15:43.439 8.249 - 8.296: 98.5633% ( 2) 00:15:43.439 8.296 - 8.344: 98.5709% ( 1) 00:15:43.439 8.391 - 8.439: 98.5784% ( 1) 00:15:43.439 8.439 - 8.486: 98.5860% ( 1) 00:15:43.439 8.486 - 8.533: 98.5936% ( 1) 00:15:43.439 8.533 - 8.581: 98.6011% ( 1) 00:15:43.439 8.581 - 8.628: 98.6087% ( 1) 00:15:43.439 8.628 - 8.676: 98.6163% ( 1) 00:15:43.439 8.676 - 8.723: 98.6314% ( 2) 00:15:43.439 8.723 - 8.770: 98.6465% ( 2) 00:15:43.439 8.770 - 8.818: 98.6541% ( 1) 00:15:43.439 9.197 - 9.244: 98.6692% ( 2) 00:15:43.439 9.244 - 9.292: 98.6767% ( 1) 00:15:43.439 9.434 - 9.481: 98.6843% ( 1) 00:15:43.439 9.576 - 9.624: 98.6919% ( 1) 00:15:43.439 9.813 - 9.861: 98.7070% ( 2) 00:15:43.439 10.098 - 10.145: 98.7146% ( 1) 00:15:43.439 10.145 - 10.193: 98.7221% ( 1) 00:15:43.439 10.382 - 10.430: 98.7297% ( 1) 00:15:43.439 10.619 - 10.667: 98.7372% ( 1) 00:15:43.439 11.283 - 11.330: 98.7448% ( 1) 00:15:43.439 11.330 - 11.378: 98.7524% ( 1) 00:15:43.439 11.567 - 11.615: 98.7599% ( 1) 00:15:43.439 11.710 - 11.757: 98.7675% ( 1) 00:15:43.439 11.757 - 11.804: 98.7750% ( 1) 00:15:43.439 11.852 - 11.899: 98.7826% ( 1) 00:15:43.439 12.326 - 12.421: 98.7902% ( 1) 00:15:43.439 12.421 - 12.516: 98.7977% ( 1) 00:15:43.439 12.516 - 12.610: 98.8053% ( 1) 00:15:43.439 12.610 - 12.705: 98.8129% ( 1) 00:15:43.439 12.800 - 12.895: 98.8204% ( 1) 00:15:43.439 12.990 - 13.084: 98.8280% ( 1) 00:15:43.439 13.084 - 13.179: 98.8355% ( 1) 00:15:43.439 13.369 - 13.464: 98.8431% ( 1) 00:15:43.439 13.464 - 13.559: 98.8582% ( 2) 00:15:43.439 13.559 - 13.653: 98.8658% ( 1) 00:15:43.439 13.843 - 13.938: 98.8809% ( 2) 00:15:43.439 13.938 - 14.033: 98.8885% ( 1) 00:15:43.439 14.127 - 14.222: 98.8960% ( 1) 00:15:43.439 14.222 - 14.317: 98.9187% ( 3) 00:15:43.439 14.791 - 14.886: 98.9263% ( 1) 00:15:43.439 17.067 - 17.161: 98.9338% ( 1) 00:15:43.439 17.161 - 17.256: 98.9565% ( 3) 00:15:43.439 17.256 - 17.351: 98.9792% ( 3) 00:15:43.439 17.351 - 17.446: 99.0170% ( 5) 00:15:43.439 17.446 - 17.541: 99.0473% ( 4) 00:15:43.439 17.541 - 17.636: 99.0699% ( 3) 
00:15:43.439 17.636 - 17.730: 99.0775% ( 1) 00:15:43.439 17.730 - 17.825: 99.1153% ( 5) 00:15:43.439 17.825 - 17.920: 99.1758% ( 8) 00:15:43.439 17.920 - 18.015: 99.2060% ( 4) 00:15:43.439 18.015 - 18.110: 99.2590% ( 7) 00:15:43.439 18.110 - 18.204: 99.3346% ( 10) 00:15:43.439 18.204 - 18.299: 99.3875% ( 7) 00:15:43.439 18.299 - 18.394: 99.4405% ( 7) 00:15:43.439 18.394 - 18.489: 99.5388% ( 13) 00:15:43.439 18.489 - 18.584: 99.5992% ( 8) 00:15:43.439 18.584 - 18.679: 99.6295% ( 4) 00:15:43.439 18.679 - 18.773: 99.6749% ( 6) 00:15:43.439 18.773 - 18.868: 99.6824% ( 1) 00:15:43.439 18.868 - 18.963: 99.6975% ( 2) 00:15:43.439 18.963 - 19.058: 99.7127% ( 2) 00:15:43.439 19.058 - 19.153: 99.7353% ( 3) 00:15:43.439 19.153 - 19.247: 99.7580% ( 3) 00:15:43.439 19.247 - 19.342: 99.7656% ( 1) 00:15:43.439 19.342 - 19.437: 99.7732% ( 1) 00:15:43.439 19.532 - 19.627: 99.7807% ( 1) 00:15:43.439 19.627 - 19.721: 99.7883% ( 1) 00:15:43.439 19.721 - 19.816: 99.7958% ( 1) 00:15:43.439 20.006 - 20.101: 99.8034% ( 1) 00:15:43.439 20.575 - 20.670: 99.8110% ( 1) 00:15:43.439 20.764 - 20.859: 99.8185% ( 1) 00:15:43.439 21.807 - 21.902: 99.8261% ( 1) 00:15:43.439 22.756 - 22.850: 99.8336% ( 1) 00:15:43.439 23.324 - 23.419: 99.8412% ( 1) 00:15:43.439 24.178 - 24.273: 99.8488% ( 1) 00:15:43.439 25.031 - 25.221: 99.8563% ( 1) 00:15:43.439 26.359 - 26.548: 99.8639% ( 1) 00:15:43.439 26.927 - 27.117: 99.8715% ( 1) 00:15:43.439 27.117 - 27.307: 99.8790% ( 1) 00:15:43.439 28.634 - 28.824: 99.8866% ( 1) 00:15:43.439 30.720 - 30.910: 99.8941% ( 1) 00:15:43.439 34.892 - 35.081: 99.9017% ( 1) 00:15:43.439 3009.801 - 3021.938: 99.9093% ( 1) 00:15:43.439 3640.889 - 3665.161: 99.9168% ( 1) 00:15:43.439 3980.705 - 4004.978: 100.0000% ( 11) 00:15:43.439 00:15:43.439 Complete histogram 00:15:43.439 ================== 00:15:43.439 Range in us Cumulative Count 00:15:43.439 2.050 - 2.062: 0.5066% ( 67) 00:15:43.439 2.062 - 2.074: 26.4348% ( 3429) 00:15:43.439 2.074 - 2.086: 35.1531% ( 1153) 00:15:43.439 2.086 - 2.098: 39.2514% ( 542) 00:15:43.439 2.098 - 2.110: 55.9319% ( 2206) 00:15:43.439 2.110 - 2.121: 59.2287% ( 436) 00:15:43.439 2.121 - 2.133: 63.1909% ( 524) 00:15:43.439 2.133 - 2.145: 72.0151% ( 1167) 00:15:43.439 2.145 - 2.157: 73.7013% ( 223) 00:15:43.439 2.157 - 2.169: 76.5293% ( 374) 00:15:43.439 2.169 - 2.181: 80.5747% ( 535) 00:15:43.439 2.181 - 2.193: 81.6106% ( 137) 00:15:43.439 2.193 - 2.204: 83.1607% ( 205) 00:15:43.439 2.204 - 2.216: 86.6994% ( 468) 00:15:43.439 2.216 - 2.228: 89.2703% ( 340) 00:15:43.439 2.228 - 2.240: 90.9036% ( 216) 00:15:43.439 2.240 - 2.252: 92.9830% ( 275) 00:15:43.439 2.252 - 2.264: 93.4367% ( 60) 00:15:43.439 2.264 - 2.276: 93.7164% ( 37) 00:15:43.439 2.276 - 2.287: 94.0718% ( 47) 00:15:43.439 2.287 - 2.299: 94.7902% ( 95) 00:15:43.439 2.299 - 2.311: 95.3270% ( 71) 00:15:43.439 2.311 - 2.323: 95.4556% ( 17) 00:15:43.439 2.323 - 2.335: 95.4858% ( 4) 00:15:43.439 2.335 - 2.347: 95.5766% ( 12) 00:15:43.439 2.347 - 2.359: 95.6975% ( 16) 00:15:43.439 2.359 - 2.370: 96.2420% ( 72) 00:15:43.439 2.370 - 2.382: 96.7183% ( 63) 00:15:43.439 2.382 - 2.394: 97.0359% ( 42) 00:15:43.439 2.394 - 2.406: 97.2930% ( 34) 00:15:43.439 2.406 - 2.418: 97.4518% ( 21) 00:15:43.439 2.418 - 2.430: 97.6181% ( 22) 00:15:43.439 2.430 - 2.441: 97.7467% ( 17) 00:15:43.439 2.441 - 2.453: 97.8904% ( 19) 00:15:43.439 2.453 - 2.465: 97.9584% ( 9) 00:15:43.439 2.465 - 2.477: 98.0416% ( 11) 00:15:43.439 2.477 - 2.489: 98.1777% ( 18) 00:15:43.439 2.489 - 2.501: 98.2457% ( 9) 00:15:43.439 2.501 - 2.513: 98.2987% ( 7) 
00:15:43.439 2.513 - 2.524: 98.3289% ( 4) 00:15:43.439 2.524 - 2.536: 98.3365% ( 1) 00:15:43.439 2.536 - 2.548: 98.3592% ( 3) 00:15:43.439 2.548 - 2.560: 98.3819% ( 3) 00:15:43.439 2.572 - 2.584: 98.3894% ( 1) 00:15:43.439 2.607 - 2.619: 98.4045% ( 2) 00:15:43.439 2.619 - 2.631: 98.4121% ( 1) 00:15:43.439 2.667 - 2.679: 98.4197% ( 1) 00:15:43.439 2.679 - 2.690: 98.4272% ( 1) 00:15:43.439 2.690 - 2.702: 98.4348% ( 1) 00:15:43.439 2.714 - 2.726: 98.4423% ( 1) 00:15:43.439 2.726 - 2.738: 98.4499% ( 1) 00:15:43.439 2.738 - 2.750: 98.4650% ( 2) 00:15:43.439 2.750 - 2.761: 9[2024-07-21 03:25:28.594115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.439 8.4726% ( 1) 00:15:43.439 2.809 - 2.821: 98.4802% ( 1) 00:15:43.439 2.821 - 2.833: 98.4877% ( 1) 00:15:43.439 2.833 - 2.844: 98.5028% ( 2) 00:15:43.439 2.868 - 2.880: 98.5104% ( 1) 00:15:43.439 3.319 - 3.342: 98.5255% ( 2) 00:15:43.439 3.342 - 3.366: 98.5406% ( 2) 00:15:43.439 3.366 - 3.390: 98.5558% ( 2) 00:15:43.439 3.390 - 3.413: 98.5633% ( 1) 00:15:43.439 3.413 - 3.437: 98.5709% ( 1) 00:15:43.439 3.461 - 3.484: 98.5784% ( 1) 00:15:43.439 3.484 - 3.508: 98.5936% ( 2) 00:15:43.439 3.508 - 3.532: 98.6011% ( 1) 00:15:43.439 3.532 - 3.556: 98.6163% ( 2) 00:15:43.439 3.556 - 3.579: 98.6238% ( 1) 00:15:43.439 3.579 - 3.603: 98.6389% ( 2) 00:15:43.439 3.603 - 3.627: 98.6465% ( 1) 00:15:43.439 3.627 - 3.650: 98.6616% ( 2) 00:15:43.439 3.674 - 3.698: 98.6692% ( 1) 00:15:43.439 3.745 - 3.769: 98.6767% ( 1) 00:15:43.439 3.769 - 3.793: 98.6843% ( 1) 00:15:43.439 3.793 - 3.816: 98.6919% ( 1) 00:15:43.439 3.816 - 3.840: 98.7070% ( 2) 00:15:43.439 3.840 - 3.864: 98.7146% ( 1) 00:15:43.439 3.911 - 3.935: 98.7221% ( 1) 00:15:43.439 4.006 - 4.030: 98.7297% ( 1) 00:15:43.439 4.053 - 4.077: 98.7372% ( 1) 00:15:43.439 4.978 - 5.001: 98.7448% ( 1) 00:15:43.439 5.049 - 5.073: 98.7524% ( 1) 00:15:43.439 5.191 - 5.215: 98.7599% ( 1) 00:15:43.439 5.381 - 5.404: 98.7675% ( 1) 00:15:43.439 5.452 - 5.476: 98.7750% ( 1) 00:15:43.439 5.547 - 5.570: 98.7826% ( 1) 00:15:43.439 5.713 - 5.736: 98.7902% ( 1) 00:15:43.439 5.760 - 5.784: 98.7977% ( 1) 00:15:43.439 5.831 - 5.855: 98.8053% ( 1) 00:15:43.439 6.044 - 6.068: 98.8129% ( 1) 00:15:43.439 6.116 - 6.163: 98.8280% ( 2) 00:15:43.439 6.684 - 6.732: 98.8355% ( 1) 00:15:43.439 7.064 - 7.111: 98.8431% ( 1) 00:15:43.439 15.360 - 15.455: 98.8582% ( 2) 00:15:43.439 15.644 - 15.739: 98.8733% ( 2) 00:15:43.439 15.739 - 15.834: 98.8960% ( 3) 00:15:43.439 15.834 - 15.929: 98.9263% ( 4) 00:15:43.439 15.929 - 16.024: 98.9414% ( 2) 00:15:43.439 16.024 - 16.119: 98.9490% ( 1) 00:15:43.439 16.119 - 16.213: 98.9868% ( 5) 00:15:43.439 16.213 - 16.308: 99.0170% ( 4) 00:15:43.439 16.308 - 16.403: 99.0473% ( 4) 00:15:43.439 16.403 - 16.498: 99.1002% ( 7) 00:15:43.439 16.498 - 16.593: 99.1229% ( 3) 00:15:43.439 16.593 - 16.687: 99.1607% ( 5) 00:15:43.439 16.687 - 16.782: 99.2060% ( 6) 00:15:43.439 16.782 - 16.877: 99.2212% ( 2) 00:15:43.439 16.877 - 16.972: 99.2514% ( 4) 00:15:43.439 16.972 - 17.067: 99.2817% ( 4) 00:15:43.439 17.067 - 17.161: 99.2968% ( 2) 00:15:43.439 17.161 - 17.256: 99.3119% ( 2) 00:15:43.439 17.351 - 17.446: 99.3270% ( 2) 00:15:43.439 17.446 - 17.541: 99.3422% ( 2) 00:15:43.439 17.541 - 17.636: 99.3573% ( 2) 00:15:43.439 17.730 - 17.825: 99.3648% ( 1) 00:15:43.439 17.825 - 17.920: 99.3724% ( 1) 00:15:43.439 17.920 - 18.015: 99.3875% ( 2) 00:15:43.439 18.015 - 18.110: 99.3951% ( 1) 00:15:43.439 18.584 - 18.679: 99.4026% ( 1) 00:15:43.439 3980.705 - 4004.978: 
99.8412% ( 58) 00:15:43.439 4004.978 - 4029.250: 99.9849% ( 19) 00:15:43.439 5971.058 - 5995.330: 100.0000% ( 2) 00:15:43.439 00:15:43.439 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:43.439 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:43.439 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:43.439 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:43.439 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.696 [ 00:15:43.696 { 00:15:43.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.696 "subtype": "Discovery", 00:15:43.696 "listen_addresses": [], 00:15:43.696 "allow_any_host": true, 00:15:43.696 "hosts": [] 00:15:43.696 }, 00:15:43.696 { 00:15:43.696 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.696 "subtype": "NVMe", 00:15:43.696 "listen_addresses": [ 00:15:43.696 { 00:15:43.696 "trtype": "VFIOUSER", 00:15:43.696 "adrfam": "IPv4", 00:15:43.696 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.696 "trsvcid": "0" 00:15:43.696 } 00:15:43.696 ], 00:15:43.697 "allow_any_host": true, 00:15:43.697 "hosts": [], 00:15:43.697 "serial_number": "SPDK1", 00:15:43.697 "model_number": "SPDK bdev Controller", 00:15:43.697 "max_namespaces": 32, 00:15:43.697 "min_cntlid": 1, 00:15:43.697 "max_cntlid": 65519, 00:15:43.697 "namespaces": [ 00:15:43.697 { 00:15:43.697 "nsid": 1, 00:15:43.697 "bdev_name": "Malloc1", 00:15:43.697 "name": "Malloc1", 00:15:43.697 "nguid": "AC270079305A444EA9EB009AEE2BA716", 00:15:43.697 "uuid": "ac270079-305a-444e-a9eb-009aee2ba716" 00:15:43.697 } 00:15:43.697 ] 00:15:43.697 }, 00:15:43.697 { 00:15:43.697 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.697 "subtype": "NVMe", 00:15:43.697 "listen_addresses": [ 00:15:43.697 { 00:15:43.697 "trtype": "VFIOUSER", 00:15:43.697 "adrfam": "IPv4", 00:15:43.697 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.697 "trsvcid": "0" 00:15:43.697 } 00:15:43.697 ], 00:15:43.697 "allow_any_host": true, 00:15:43.697 "hosts": [], 00:15:43.697 "serial_number": "SPDK2", 00:15:43.697 "model_number": "SPDK bdev Controller", 00:15:43.697 "max_namespaces": 32, 00:15:43.697 "min_cntlid": 1, 00:15:43.697 "max_cntlid": 65519, 00:15:43.697 "namespaces": [ 00:15:43.697 { 00:15:43.697 "nsid": 1, 00:15:43.697 "bdev_name": "Malloc2", 00:15:43.697 "name": "Malloc2", 00:15:43.697 "nguid": "403CF3C1C50F4DA180877A40C9021DC5", 00:15:43.697 "uuid": "403cf3c1-c50f-4da1-8087-7a40c9021dc5" 00:15:43.697 } 00:15:43.697 ] 00:15:43.697 } 00:15:43.697 ] 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2374969 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- 
# local i=0 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.697 03:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:43.697 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.954 [2024-07-21 03:25:29.094131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.954 Malloc3 00:15:43.954 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:44.210 [2024-07-21 03:25:29.457761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:44.210 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:44.210 Asynchronous Event Request test 00:15:44.210 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.210 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.210 Registering asynchronous event callbacks... 00:15:44.210 Starting namespace attribute notice tests for all controllers... 00:15:44.210 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:44.210 aer_cb - Changed Namespace 00:15:44.210 Cleaning up... 
00:15:44.467 [ 00:15:44.467 { 00:15:44.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.467 "subtype": "Discovery", 00:15:44.467 "listen_addresses": [], 00:15:44.467 "allow_any_host": true, 00:15:44.467 "hosts": [] 00:15:44.467 }, 00:15:44.467 { 00:15:44.467 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.467 "subtype": "NVMe", 00:15:44.467 "listen_addresses": [ 00:15:44.467 { 00:15:44.467 "trtype": "VFIOUSER", 00:15:44.467 "adrfam": "IPv4", 00:15:44.467 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.467 "trsvcid": "0" 00:15:44.467 } 00:15:44.467 ], 00:15:44.467 "allow_any_host": true, 00:15:44.467 "hosts": [], 00:15:44.467 "serial_number": "SPDK1", 00:15:44.467 "model_number": "SPDK bdev Controller", 00:15:44.467 "max_namespaces": 32, 00:15:44.467 "min_cntlid": 1, 00:15:44.467 "max_cntlid": 65519, 00:15:44.467 "namespaces": [ 00:15:44.467 { 00:15:44.467 "nsid": 1, 00:15:44.467 "bdev_name": "Malloc1", 00:15:44.467 "name": "Malloc1", 00:15:44.467 "nguid": "AC270079305A444EA9EB009AEE2BA716", 00:15:44.467 "uuid": "ac270079-305a-444e-a9eb-009aee2ba716" 00:15:44.467 }, 00:15:44.467 { 00:15:44.467 "nsid": 2, 00:15:44.467 "bdev_name": "Malloc3", 00:15:44.467 "name": "Malloc3", 00:15:44.467 "nguid": "28EB5A09118E44A697E0E46E7D89A836", 00:15:44.467 "uuid": "28eb5a09-118e-44a6-97e0-e46e7d89a836" 00:15:44.467 } 00:15:44.467 ] 00:15:44.467 }, 00:15:44.467 { 00:15:44.467 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.467 "subtype": "NVMe", 00:15:44.467 "listen_addresses": [ 00:15:44.467 { 00:15:44.467 "trtype": "VFIOUSER", 00:15:44.467 "adrfam": "IPv4", 00:15:44.467 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.467 "trsvcid": "0" 00:15:44.467 } 00:15:44.467 ], 00:15:44.467 "allow_any_host": true, 00:15:44.467 "hosts": [], 00:15:44.467 "serial_number": "SPDK2", 00:15:44.467 "model_number": "SPDK bdev Controller", 00:15:44.467 "max_namespaces": 32, 00:15:44.467 "min_cntlid": 1, 00:15:44.467 "max_cntlid": 65519, 00:15:44.467 "namespaces": [ 00:15:44.467 { 00:15:44.467 "nsid": 1, 00:15:44.467 "bdev_name": "Malloc2", 00:15:44.467 "name": "Malloc2", 00:15:44.467 "nguid": "403CF3C1C50F4DA180877A40C9021DC5", 00:15:44.467 "uuid": "403cf3c1-c50f-4da1-8087-7a40c9021dc5" 00:15:44.467 } 00:15:44.467 ] 00:15:44.467 } 00:15:44.467 ] 00:15:44.467 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2374969 00:15:44.467 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.467 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:44.467 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:44.467 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:44.467 [2024-07-21 03:25:29.728996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:44.467 [2024-07-21 03:25:29.729034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374990 ] 00:15:44.467 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.467 [2024-07-21 03:25:29.762719] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:44.467 [2024-07-21 03:25:29.770928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.467 [2024-07-21 03:25:29.770957] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fef98602000 00:15:44.467 [2024-07-21 03:25:29.771924] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.772929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.773938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.774946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.775948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.776957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.777961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.467 [2024-07-21 03:25:29.778984] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.725 [2024-07-21 03:25:29.779979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.725 [2024-07-21 03:25:29.780002] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fef973b4000 00:15:44.725 [2024-07-21 03:25:29.781385] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:44.725 [2024-07-21 03:25:29.802549] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:44.725 [2024-07-21 03:25:29.802581] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:44.725 [2024-07-21 03:25:29.804690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.725 [2024-07-21 03:25:29.804745] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:44.725 [2024-07-21 03:25:29.804831] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:44.725 [2024-07-21 03:25:29.804854] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:44.725 [2024-07-21 03:25:29.804864] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:44.725 [2024-07-21 03:25:29.805698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:44.725 [2024-07-21 03:25:29.805722] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:44.725 [2024-07-21 03:25:29.805736] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:44.725 [2024-07-21 03:25:29.806720] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.725 [2024-07-21 03:25:29.806742] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:44.725 [2024-07-21 03:25:29.806757] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.807717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:44.725 [2024-07-21 03:25:29.807737] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.808724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:44.725 [2024-07-21 03:25:29.808744] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:44.725 [2024-07-21 03:25:29.808753] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.808765] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.808874] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:44.725 [2024-07-21 03:25:29.808886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.808895] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:44.725 [2024-07-21 03:25:29.809730] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:44.725 [2024-07-21 03:25:29.810734] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:44.725 [2024-07-21 03:25:29.811742] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:44.725 [2024-07-21 03:25:29.812735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.725 [2024-07-21 03:25:29.812827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:44.725 [2024-07-21 03:25:29.813756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:44.725 [2024-07-21 03:25:29.813775] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:44.725 [2024-07-21 03:25:29.813785] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.813809] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:44.725 [2024-07-21 03:25:29.813822] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.813844] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.725 [2024-07-21 03:25:29.813854] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.725 [2024-07-21 03:25:29.813871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.822628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.822654] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:44.725 [2024-07-21 03:25:29.822680] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:44.725 [2024-07-21 03:25:29.822687] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:44.725 [2024-07-21 03:25:29.822695] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:44.725 [2024-07-21 03:25:29.822704] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:44.725 [2024-07-21 03:25:29.822712] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:44.725 [2024-07-21 03:25:29.822720] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.822732] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.822748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.830626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.830654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.725 [2024-07-21 03:25:29.830668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.725 [2024-07-21 03:25:29.830681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.725 [2024-07-21 03:25:29.830693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.725 [2024-07-21 03:25:29.830702] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.830719] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.830734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.838622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.838641] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:44.725 [2024-07-21 03:25:29.838650] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.838677] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.838692] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.838707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.846627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.846713] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.846730] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.846744] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:44.725 [2024-07-21 03:25:29.846753] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:44.725 [2024-07-21 03:25:29.846763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:44.725 
[2024-07-21 03:25:29.854624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.854648] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:44.725 [2024-07-21 03:25:29.854668] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.854683] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.854695] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.725 [2024-07-21 03:25:29.854708] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.725 [2024-07-21 03:25:29.854718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.862624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.862654] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.862671] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.862684] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.725 [2024-07-21 03:25:29.862693] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.725 [2024-07-21 03:25:29.862703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.870626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.870649] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870661] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870675] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870686] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870695] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870703] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:44.725 [2024-07-21 03:25:29.870711] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:44.725 [2024-07-21 03:25:29.870720] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:44.725 [2024-07-21 03:25:29.870750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.878630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.878657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.886628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.886653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.894652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.902625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.902652] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:44.725 [2024-07-21 03:25:29.902666] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:44.725 [2024-07-21 03:25:29.902673] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:44.725 [2024-07-21 03:25:29.902679] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:44.725 [2024-07-21 03:25:29.902689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:44.725 [2024-07-21 03:25:29.902701] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:44.725 [2024-07-21 03:25:29.902709] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:44.725 [2024-07-21 03:25:29.902718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.902729] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:44.725 [2024-07-21 03:25:29.902737] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.725 [2024-07-21 03:25:29.902746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.902758] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:44.725 [2024-07-21 03:25:29.902766] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:44.725 [2024-07-21 03:25:29.902774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:44.725 [2024-07-21 03:25:29.910645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.910695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.910711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:44.725 [2024-07-21 03:25:29.910726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:44.725 ===================================================== 00:15:44.725 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.725 ===================================================== 00:15:44.725 Controller Capabilities/Features 00:15:44.725 ================================ 00:15:44.725 Vendor ID: 4e58 00:15:44.725 Subsystem Vendor ID: 4e58 00:15:44.725 Serial Number: SPDK2 00:15:44.725 Model Number: SPDK bdev Controller 00:15:44.725 Firmware Version: 24.05.1 00:15:44.725 Recommended Arb Burst: 6 00:15:44.725 IEEE OUI Identifier: 8d 6b 50 00:15:44.725 Multi-path I/O 00:15:44.725 May have multiple subsystem ports: Yes 00:15:44.725 May have multiple controllers: Yes 00:15:44.725 Associated with SR-IOV VF: No 00:15:44.725 Max Data Transfer Size: 131072 00:15:44.725 Max Number of Namespaces: 32 00:15:44.725 Max Number of I/O Queues: 127 00:15:44.725 NVMe Specification Version (VS): 1.3 00:15:44.725 NVMe Specification Version (Identify): 1.3 00:15:44.725 Maximum Queue Entries: 256 00:15:44.725 Contiguous Queues Required: Yes 00:15:44.725 Arbitration Mechanisms Supported 00:15:44.725 Weighted Round Robin: Not Supported 00:15:44.725 Vendor Specific: Not Supported 00:15:44.725 Reset Timeout: 15000 ms 00:15:44.725 Doorbell Stride: 4 bytes 00:15:44.725 NVM Subsystem Reset: Not Supported 00:15:44.725 Command Sets Supported 00:15:44.725 NVM Command Set: Supported 00:15:44.725 Boot Partition: Not Supported 00:15:44.725 Memory Page Size Minimum: 4096 bytes 00:15:44.726 Memory Page Size Maximum: 4096 bytes 00:15:44.726 Persistent Memory Region: Not Supported 00:15:44.726 Optional Asynchronous Events Supported 00:15:44.726 Namespace Attribute Notices: Supported 00:15:44.726 Firmware Activation Notices: Not Supported 00:15:44.726 ANA Change Notices: Not Supported 00:15:44.726 PLE Aggregate Log Change Notices: Not Supported 00:15:44.726 LBA Status Info Alert Notices: Not Supported 00:15:44.726 EGE Aggregate Log Change Notices: Not Supported 00:15:44.726 Normal NVM Subsystem Shutdown event: Not Supported 00:15:44.726 Zone Descriptor Change Notices: Not Supported 00:15:44.726 Discovery Log Change Notices: Not Supported 00:15:44.726 Controller Attributes 00:15:44.726 128-bit Host Identifier: Supported 00:15:44.726 Non-Operational Permissive Mode: Not Supported 00:15:44.726 NVM Sets: Not Supported 00:15:44.726 Read Recovery Levels: Not Supported 00:15:44.726 Endurance Groups: Not Supported 00:15:44.726 Predictable Latency Mode: Not Supported 00:15:44.726 Traffic Based Keep ALive: Not Supported 00:15:44.726 Namespace Granularity: Not 
Supported 00:15:44.726 SQ Associations: Not Supported 00:15:44.726 UUID List: Not Supported 00:15:44.726 Multi-Domain Subsystem: Not Supported 00:15:44.726 Fixed Capacity Management: Not Supported 00:15:44.726 Variable Capacity Management: Not Supported 00:15:44.726 Delete Endurance Group: Not Supported 00:15:44.726 Delete NVM Set: Not Supported 00:15:44.726 Extended LBA Formats Supported: Not Supported 00:15:44.726 Flexible Data Placement Supported: Not Supported 00:15:44.726 00:15:44.726 Controller Memory Buffer Support 00:15:44.726 ================================ 00:15:44.726 Supported: No 00:15:44.726 00:15:44.726 Persistent Memory Region Support 00:15:44.726 ================================ 00:15:44.726 Supported: No 00:15:44.726 00:15:44.726 Admin Command Set Attributes 00:15:44.726 ============================ 00:15:44.726 Security Send/Receive: Not Supported 00:15:44.726 Format NVM: Not Supported 00:15:44.726 Firmware Activate/Download: Not Supported 00:15:44.726 Namespace Management: Not Supported 00:15:44.726 Device Self-Test: Not Supported 00:15:44.726 Directives: Not Supported 00:15:44.726 NVMe-MI: Not Supported 00:15:44.726 Virtualization Management: Not Supported 00:15:44.726 Doorbell Buffer Config: Not Supported 00:15:44.726 Get LBA Status Capability: Not Supported 00:15:44.726 Command & Feature Lockdown Capability: Not Supported 00:15:44.726 Abort Command Limit: 4 00:15:44.726 Async Event Request Limit: 4 00:15:44.726 Number of Firmware Slots: N/A 00:15:44.726 Firmware Slot 1 Read-Only: N/A 00:15:44.726 Firmware Activation Without Reset: N/A 00:15:44.726 Multiple Update Detection Support: N/A 00:15:44.726 Firmware Update Granularity: No Information Provided 00:15:44.726 Per-Namespace SMART Log: No 00:15:44.726 Asymmetric Namespace Access Log Page: Not Supported 00:15:44.726 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:44.726 Command Effects Log Page: Supported 00:15:44.726 Get Log Page Extended Data: Supported 00:15:44.726 Telemetry Log Pages: Not Supported 00:15:44.726 Persistent Event Log Pages: Not Supported 00:15:44.726 Supported Log Pages Log Page: May Support 00:15:44.726 Commands Supported & Effects Log Page: Not Supported 00:15:44.726 Feature Identifiers & Effects Log Page:May Support 00:15:44.726 NVMe-MI Commands & Effects Log Page: May Support 00:15:44.726 Data Area 4 for Telemetry Log: Not Supported 00:15:44.726 Error Log Page Entries Supported: 128 00:15:44.726 Keep Alive: Supported 00:15:44.726 Keep Alive Granularity: 10000 ms 00:15:44.726 00:15:44.726 NVM Command Set Attributes 00:15:44.726 ========================== 00:15:44.726 Submission Queue Entry Size 00:15:44.726 Max: 64 00:15:44.726 Min: 64 00:15:44.726 Completion Queue Entry Size 00:15:44.726 Max: 16 00:15:44.726 Min: 16 00:15:44.726 Number of Namespaces: 32 00:15:44.726 Compare Command: Supported 00:15:44.726 Write Uncorrectable Command: Not Supported 00:15:44.726 Dataset Management Command: Supported 00:15:44.726 Write Zeroes Command: Supported 00:15:44.726 Set Features Save Field: Not Supported 00:15:44.726 Reservations: Not Supported 00:15:44.726 Timestamp: Not Supported 00:15:44.726 Copy: Supported 00:15:44.726 Volatile Write Cache: Present 00:15:44.726 Atomic Write Unit (Normal): 1 00:15:44.726 Atomic Write Unit (PFail): 1 00:15:44.726 Atomic Compare & Write Unit: 1 00:15:44.726 Fused Compare & Write: Supported 00:15:44.726 Scatter-Gather List 00:15:44.726 SGL Command Set: Supported (Dword aligned) 00:15:44.726 SGL Keyed: Not Supported 00:15:44.726 SGL Bit Bucket Descriptor: Not Supported 
00:15:44.726 SGL Metadata Pointer: Not Supported 00:15:44.726 Oversized SGL: Not Supported 00:15:44.726 SGL Metadata Address: Not Supported 00:15:44.726 SGL Offset: Not Supported 00:15:44.726 Transport SGL Data Block: Not Supported 00:15:44.726 Replay Protected Memory Block: Not Supported 00:15:44.726 
00:15:44.726 Firmware Slot Information 00:15:44.726 ========================= 00:15:44.726 Active slot: 1 00:15:44.726 Slot 1 Firmware Revision: 24.05.1 00:15:44.726 00:15:44.726 
00:15:44.726 Commands Supported and Effects 00:15:44.726 ============================== 00:15:44.726 Admin Commands 00:15:44.726 -------------- 00:15:44.726 Get Log Page (02h): Supported 00:15:44.726 Identify (06h): Supported 00:15:44.726 Abort (08h): Supported 00:15:44.726 Set Features (09h): Supported 00:15:44.726 Get Features (0Ah): Supported 00:15:44.726 Asynchronous Event Request (0Ch): Supported 00:15:44.726 Keep Alive (18h): Supported 00:15:44.726 I/O Commands 00:15:44.726 ------------ 00:15:44.726 Flush (00h): Supported LBA-Change 00:15:44.726 Write (01h): Supported LBA-Change 00:15:44.726 Read (02h): Supported 00:15:44.726 Compare (05h): Supported 00:15:44.726 Write Zeroes (08h): Supported LBA-Change 00:15:44.726 Dataset Management (09h): Supported LBA-Change 00:15:44.726 Copy (19h): Supported LBA-Change 00:15:44.726 Unknown (79h): Supported LBA-Change 00:15:44.726 Unknown (7Ah): Supported 00:15:44.726 
00:15:44.726 Error Log 00:15:44.726 ========= 00:15:44.726 00:15:44.726 Arbitration 00:15:44.726 =========== 00:15:44.726 Arbitration Burst: 1 00:15:44.726 
00:15:44.726 Power Management 00:15:44.726 ================ 00:15:44.726 Number of Power States: 1 00:15:44.726 Current Power State: Power State #0 00:15:44.726 Power State #0: 00:15:44.726 Max Power: 0.00 W 00:15:44.726 Non-Operational State: Operational 00:15:44.726 Entry Latency: Not Reported 00:15:44.726 Exit Latency: Not Reported 00:15:44.726 Relative Read Throughput: 0 00:15:44.726 Relative Read Latency: 0 00:15:44.726 Relative Write Throughput: 0 00:15:44.726 Relative Write Latency: 0 00:15:44.726 Idle Power: Not Reported 00:15:44.726 Active Power: Not Reported 00:15:44.726 Non-Operational Permissive Mode: Not Supported 00:15:44.726 
00:15:44.726 Health Information 00:15:44.726 ================== 00:15:44.726 Critical Warnings: 00:15:44.726 Available Spare Space: OK 00:15:44.726 Temperature: OK 00:15:44.726 Device Reliability: OK 00:15:44.726 Read Only: No 00:15:44.726 Volatile Memory Backup: OK 00:15:44.726 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-07-21 03:25:29.910851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:44.726 [2024-07-21 03:25:29.918626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:44.726 [2024-07-21 03:25:29.918673] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:15:44.726 [2024-07-21 03:25:29.918690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:44.726 [2024-07-21 03:25:29.918701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:44.726 [2024-07-21 03:25:29.918711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:44.726 [2024-07-21 03:25:29.918720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:44.726 [2024-07-21 03:25:29.918800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:44.726 [2024-07-21 03:25:29.918821] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:15:44.726 [2024-07-21 03:25:29.919810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:44.726 [2024-07-21 03:25:29.919879] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:15:44.726 [2024-07-21 03:25:29.919903] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:15:44.726 [2024-07-21 03:25:29.920819] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:15:44.726 [2024-07-21 03:25:29.920844] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:15:44.726 [2024-07-21 03:25:29.920895] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:15:44.726 [2024-07-21 03:25:29.922090] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:44.726 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:44.726 Available Spare: 0% 00:15:44.726 Available Spare Threshold: 0% 00:15:44.726 Life Percentage Used: 0% 00:15:44.726 Data Units Read: 0 00:15:44.726 Data Units Written: 0 00:15:44.726 Host Read Commands: 0 00:15:44.726 Host Write Commands: 0 00:15:44.726 Controller Busy Time: 0 minutes 00:15:44.726 Power Cycles: 0 00:15:44.726 Power On Hours: 0 hours 00:15:44.726 Unsafe Shutdowns: 0 00:15:44.726 Unrecoverable Media Errors: 0 00:15:44.726 Lifetime Error Log Entries: 0 00:15:44.726 Warning Temperature Time: 0 minutes 00:15:44.726 Critical Temperature Time: 0 minutes 00:15:44.726 
00:15:44.726 Number of Queues 00:15:44.726 ================ 00:15:44.726 Number of I/O Submission Queues: 127 00:15:44.726 Number of I/O Completion Queues: 127 00:15:44.726 
00:15:44.726 Active Namespaces 00:15:44.726 ================= 00:15:44.726 Namespace ID:1 00:15:44.726 Error Recovery Timeout: Unlimited 00:15:44.726 Command Set Identifier: NVM (00h) 00:15:44.726 Deallocate: Supported 00:15:44.726 Deallocated/Unwritten Error: Not Supported 00:15:44.726 Deallocated Read Value: Unknown 00:15:44.726 Deallocate in Write Zeroes: Not Supported 00:15:44.726 Deallocated Guard Field: 0xFFFF 00:15:44.726 Flush: Supported 00:15:44.726 Reservation: Supported 00:15:44.726 Namespace Sharing Capabilities: Multiple Controllers 00:15:44.726 Size (in LBAs): 131072 (0GiB) 00:15:44.726 Capacity (in LBAs): 131072 (0GiB) 00:15:44.726 Utilization (in LBAs): 131072 (0GiB) 00:15:44.726 NGUID: 403CF3C1C50F4DA180877A40C9021DC5 00:15:44.726 UUID: 403cf3c1-c50f-4da1-8087-7a40c9021dc5 00:15:44.726 Thin Provisioning: Not Supported 00:15:44.726 Per-NS Atomic Units: Yes 00:15:44.726 Atomic Boundary Size (Normal): 0 00:15:44.726 Atomic Boundary Size (PFail): 0 00:15:44.726 Atomic Boundary Offset: 0 00:15:44.726 Maximum Single Source Range 
Length: 65535 00:15:44.726 Maximum Copy Length: 65535 00:15:44.726 Maximum Source Range Count: 1 00:15:44.726 NGUID/EUI64 Never Reused: No 00:15:44.726 Namespace Write Protected: No 00:15:44.726 Number of LBA Formats: 1 00:15:44.726 Current LBA Format: LBA Format #00 00:15:44.726 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:44.726 00:15:44.726 03:25:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:44.726 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.983 [2024-07-21 03:25:30.152328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.238 Initializing NVMe Controllers 00:15:50.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:50.238 Initialization complete. Launching workers. 00:15:50.238 ======================================================== 00:15:50.238 Latency(us) 00:15:50.238 Device Information : IOPS MiB/s Average min max 00:15:50.239 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36234.63 141.54 3531.87 1164.78 7325.74 00:15:50.239 ======================================================== 00:15:50.239 Total : 36234.63 141.54 3531.87 1164.78 7325.74 00:15:50.239 00:15:50.239 [2024-07-21 03:25:35.252998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.239 03:25:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:50.239 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.239 [2024-07-21 03:25:35.483567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.493 Initializing NVMe Controllers 00:15:55.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:55.493 Initialization complete. Launching workers. 
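A sanity check on the read-run table above: at the 4 KiB block size given by -o 4096, the MiB/s column follows directly from the IOPS column, 36234.63 x 4096 / 2^20 = 141.54 MiB/s; the write-run table that follows obeys the same relation. One way to verify (illustrative one-liner, assuming bc is installed):

    echo 'scale=2; 36234.63 * 4096 / 1048576' | bc    # prints 141.54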
00:15:55.493 ======================================================== 00:15:55.493 Latency(us) 00:15:55.493 Device Information : IOPS MiB/s Average min max 00:15:55.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34306.59 134.01 3730.67 1184.02 7632.33 00:15:55.493 ======================================================== 00:15:55.493 Total : 34306.59 134.01 3730.67 1184.02 7632.33 00:15:55.493 00:15:55.493 [2024-07-21 03:25:40.504725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.493 03:25:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:55.493 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.493 [2024-07-21 03:25:40.715410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.810 [2024-07-21 03:25:45.850764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.810 Initializing NVMe Controllers 00:16:00.810 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.810 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.810 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:00.810 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:00.810 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:00.810 Initialization complete. Launching workers. 00:16:00.810 Starting thread on core 2 00:16:00.810 Starting thread on core 3 00:16:00.810 Starting thread on core 1 00:16:00.810 03:25:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:00.810 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.067 [2024-07-21 03:25:46.147046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.249 [2024-07-21 03:25:49.835924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.249 Initializing NVMe Controllers 00:16:05.249 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.249 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.249 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:05.249 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:05.249 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:05.249 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:05.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:05.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:05.249 Initialization complete. Launching workers. 
00:16:05.249 Starting thread on core 1 with urgent priority queue 00:16:05.249 Starting thread on core 2 with urgent priority queue 00:16:05.249 Starting thread on core 3 with urgent priority queue 00:16:05.249 Starting thread on core 0 with urgent priority queue 00:16:05.249 SPDK bdev Controller (SPDK2 ) core 0: 2277.00 IO/s 43.92 secs/100000 ios 00:16:05.249 SPDK bdev Controller (SPDK2 ) core 1: 2386.33 IO/s 41.91 secs/100000 ios 00:16:05.249 SPDK bdev Controller (SPDK2 ) core 2: 2441.67 IO/s 40.96 secs/100000 ios 00:16:05.249 SPDK bdev Controller (SPDK2 ) core 3: 2038.33 IO/s 49.06 secs/100000 ios 00:16:05.249 ======================================================== 00:16:05.249 00:16:05.249 03:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:05.249 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.249 [2024-07-21 03:25:50.132161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.249 Initializing NVMe Controllers 00:16:05.249 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.249 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.249 Namespace ID: 1 size: 0GB 00:16:05.249 Initialization complete. 00:16:05.249 INFO: using host memory buffer for IO 00:16:05.249 Hello world! 00:16:05.249 [2024-07-21 03:25:50.145228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.249 03:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:05.249 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.249 [2024-07-21 03:25:50.437290] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.622 Initializing NVMe Controllers 00:16:06.622 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.622 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.622 Initialization complete. Launching workers. 
00:16:06.622 submit (in ns) avg, min, max = 8180.7, 3501.1, 4017506.7 00:16:06.622 complete (in ns) avg, min, max = 25840.7, 2041.1, 4107618.9 00:16:06.622 00:16:06.622 Submit histogram 00:16:06.622 ================ 00:16:06.622 Range in us Cumulative Count 00:16:06.622 3.484 - 3.508: 0.0525% ( 7) 00:16:06.622 3.508 - 3.532: 0.7872% ( 98) 00:16:06.622 3.532 - 3.556: 1.7543% ( 129) 00:16:06.622 3.556 - 3.579: 5.3452% ( 479) 00:16:06.622 3.579 - 3.603: 10.4955% ( 687) 00:16:06.622 3.603 - 3.627: 19.4617% ( 1196) 00:16:06.622 3.627 - 3.650: 28.6528% ( 1226) 00:16:06.622 3.650 - 3.674: 37.8964% ( 1233) 00:16:06.622 3.674 - 3.698: 44.6660% ( 903) 00:16:06.622 3.698 - 3.721: 49.9288% ( 702) 00:16:06.622 3.721 - 3.745: 53.8871% ( 528) 00:16:06.622 3.745 - 3.769: 57.5830% ( 493) 00:16:06.622 3.769 - 3.793: 61.1140% ( 471) 00:16:06.622 3.793 - 3.816: 64.6675% ( 474) 00:16:06.622 3.816 - 3.840: 68.4159% ( 500) 00:16:06.622 3.840 - 3.864: 73.0340% ( 616) 00:16:06.622 3.864 - 3.887: 77.3221% ( 572) 00:16:06.622 3.887 - 3.911: 81.1230% ( 507) 00:16:06.622 3.911 - 3.935: 84.1592% ( 405) 00:16:06.622 3.935 - 3.959: 86.0634% ( 254) 00:16:06.622 3.959 - 3.982: 87.3979% ( 178) 00:16:06.622 3.982 - 4.006: 88.7923% ( 186) 00:16:06.622 4.006 - 4.030: 89.8493% ( 141) 00:16:06.622 4.030 - 4.053: 90.8839% ( 138) 00:16:06.622 4.053 - 4.077: 91.9409% ( 141) 00:16:06.622 4.077 - 4.101: 93.1179% ( 157) 00:16:06.622 4.101 - 4.124: 93.8976% ( 104) 00:16:06.622 4.124 - 4.148: 94.6098% ( 95) 00:16:06.622 4.148 - 4.172: 95.1121% ( 67) 00:16:06.622 4.172 - 4.196: 95.4719% ( 48) 00:16:06.622 4.196 - 4.219: 95.7943% ( 43) 00:16:06.622 4.219 - 4.243: 96.0642% ( 36) 00:16:06.622 4.243 - 4.267: 96.2891% ( 30) 00:16:06.622 4.267 - 4.290: 96.4315% ( 19) 00:16:06.622 4.290 - 4.314: 96.6114% ( 24) 00:16:06.622 4.314 - 4.338: 96.6864% ( 10) 00:16:06.622 4.338 - 4.361: 96.7989% ( 15) 00:16:06.622 4.361 - 4.385: 96.8813% ( 11) 00:16:06.622 4.385 - 4.409: 96.9563% ( 10) 00:16:06.622 4.409 - 4.433: 97.0088% ( 7) 00:16:06.622 4.433 - 4.456: 97.0463% ( 5) 00:16:06.622 4.456 - 4.480: 97.0912% ( 6) 00:16:06.622 4.480 - 4.504: 97.1062% ( 2) 00:16:06.622 4.504 - 4.527: 97.1212% ( 2) 00:16:06.622 4.527 - 4.551: 97.1287% ( 1) 00:16:06.622 4.575 - 4.599: 97.1437% ( 2) 00:16:06.622 4.622 - 4.646: 97.1587% ( 2) 00:16:06.622 4.646 - 4.670: 97.1812% ( 3) 00:16:06.622 4.693 - 4.717: 97.2037% ( 3) 00:16:06.622 4.717 - 4.741: 97.2112% ( 1) 00:16:06.622 4.741 - 4.764: 97.2262% ( 2) 00:16:06.622 4.764 - 4.788: 97.2412% ( 2) 00:16:06.622 4.788 - 4.812: 97.2862% ( 6) 00:16:06.622 4.812 - 4.836: 97.3161% ( 4) 00:16:06.622 4.836 - 4.859: 97.3761% ( 8) 00:16:06.622 4.859 - 4.883: 97.3986% ( 3) 00:16:06.622 4.883 - 4.907: 97.4586% ( 8) 00:16:06.622 4.907 - 4.930: 97.4961% ( 5) 00:16:06.622 4.930 - 4.954: 97.5560% ( 8) 00:16:06.622 4.954 - 4.978: 97.5635% ( 1) 00:16:06.622 4.978 - 5.001: 97.6160% ( 7) 00:16:06.622 5.001 - 5.025: 97.6460% ( 4) 00:16:06.622 5.025 - 5.049: 97.6685% ( 3) 00:16:06.622 5.049 - 5.073: 97.6985% ( 4) 00:16:06.622 5.073 - 5.096: 97.7135% ( 2) 00:16:06.622 5.096 - 5.120: 97.7360% ( 3) 00:16:06.622 5.120 - 5.144: 97.7585% ( 3) 00:16:06.622 5.144 - 5.167: 97.7659% ( 1) 00:16:06.622 5.167 - 5.191: 97.7959% ( 4) 00:16:06.622 5.191 - 5.215: 97.8334% ( 5) 00:16:06.622 5.215 - 5.239: 97.8559% ( 3) 00:16:06.622 5.239 - 5.262: 97.8634% ( 1) 00:16:06.622 5.262 - 5.286: 97.8709% ( 1) 00:16:06.622 5.310 - 5.333: 97.8784% ( 1) 00:16:06.622 5.357 - 5.381: 97.8859% ( 1) 00:16:06.622 5.381 - 5.404: 97.9309% ( 6) 00:16:06.622 5.404 - 5.428: 97.9459% ( 2) 
00:16:06.622 5.428 - 5.452: 97.9609% ( 2) 00:16:06.622 5.452 - 5.476: 97.9684% ( 1) 00:16:06.622 5.713 - 5.736: 97.9759% ( 1) 00:16:06.622 5.784 - 5.807: 97.9834% ( 1) 00:16:06.622 5.807 - 5.831: 97.9909% ( 1) 00:16:06.622 5.831 - 5.855: 97.9984% ( 1) 00:16:06.622 5.855 - 5.879: 98.0058% ( 1) 00:16:06.622 5.879 - 5.902: 98.0208% ( 2) 00:16:06.622 5.950 - 5.973: 98.0283% ( 1) 00:16:06.622 5.973 - 5.997: 98.0358% ( 1) 00:16:06.622 5.997 - 6.021: 98.0583% ( 3) 00:16:06.622 6.021 - 6.044: 98.0808% ( 3) 00:16:06.622 6.044 - 6.068: 98.0883% ( 1) 00:16:06.622 6.258 - 6.305: 98.1033% ( 2) 00:16:06.622 6.447 - 6.495: 98.1108% ( 1) 00:16:06.622 6.542 - 6.590: 98.1258% ( 2) 00:16:06.622 6.590 - 6.637: 98.1333% ( 1) 00:16:06.622 6.779 - 6.827: 98.1483% ( 2) 00:16:06.622 6.874 - 6.921: 98.1558% ( 1) 00:16:06.622 6.969 - 7.016: 98.1633% ( 1) 00:16:06.622 7.016 - 7.064: 98.1708% ( 1) 00:16:06.622 7.064 - 7.111: 98.1783% ( 1) 00:16:06.622 7.159 - 7.206: 98.1858% ( 1) 00:16:06.622 7.253 - 7.301: 98.1933% ( 1) 00:16:06.622 7.301 - 7.348: 98.2008% ( 1) 00:16:06.622 7.348 - 7.396: 98.2083% ( 1) 00:16:06.622 7.396 - 7.443: 98.2158% ( 1) 00:16:06.622 7.443 - 7.490: 98.2382% ( 3) 00:16:06.622 7.585 - 7.633: 98.2457% ( 1) 00:16:06.622 7.633 - 7.680: 98.2532% ( 1) 00:16:06.622 7.680 - 7.727: 98.2682% ( 2) 00:16:06.622 7.727 - 7.775: 98.2832% ( 2) 00:16:06.622 7.775 - 7.822: 98.2982% ( 2) 00:16:06.622 7.822 - 7.870: 98.3132% ( 2) 00:16:06.622 7.917 - 7.964: 98.3207% ( 1) 00:16:06.622 7.964 - 8.012: 98.3282% ( 1) 00:16:06.622 8.012 - 8.059: 98.3432% ( 2) 00:16:06.622 8.059 - 8.107: 98.3732% ( 4) 00:16:06.622 8.107 - 8.154: 98.3957% ( 3) 00:16:06.622 8.154 - 8.201: 98.4032% ( 1) 00:16:06.622 8.344 - 8.391: 98.4182% ( 2) 00:16:06.622 8.439 - 8.486: 98.4407% ( 3) 00:16:06.622 8.581 - 8.628: 98.4632% ( 3) 00:16:06.622 8.676 - 8.723: 98.4706% ( 1) 00:16:06.622 8.723 - 8.770: 98.4856% ( 2) 00:16:06.622 8.770 - 8.818: 98.4931% ( 1) 00:16:06.622 9.007 - 9.055: 98.5006% ( 1) 00:16:06.622 9.102 - 9.150: 98.5081% ( 1) 00:16:06.622 9.292 - 9.339: 98.5156% ( 1) 00:16:06.622 9.339 - 9.387: 98.5231% ( 1) 00:16:06.622 9.387 - 9.434: 98.5306% ( 1) 00:16:06.622 9.481 - 9.529: 98.5381% ( 1) 00:16:06.622 9.529 - 9.576: 98.5456% ( 1) 00:16:06.622 9.908 - 9.956: 98.5531% ( 1) 00:16:06.622 10.098 - 10.145: 98.5606% ( 1) 00:16:06.622 10.240 - 10.287: 98.5681% ( 1) 00:16:06.622 10.619 - 10.667: 98.5756% ( 1) 00:16:06.622 11.093 - 11.141: 98.5831% ( 1) 00:16:06.622 11.188 - 11.236: 98.5906% ( 1) 00:16:06.622 11.378 - 11.425: 98.5981% ( 1) 00:16:06.623 11.473 - 11.520: 98.6056% ( 1) 00:16:06.623 11.520 - 11.567: 98.6131% ( 1) 00:16:06.623 11.710 - 11.757: 98.6206% ( 1) 00:16:06.623 11.947 - 11.994: 98.6281% ( 1) 00:16:06.623 12.089 - 12.136: 98.6356% ( 1) 00:16:06.623 12.136 - 12.231: 98.6431% ( 1) 00:16:06.623 12.421 - 12.516: 98.6506% ( 1) 00:16:06.623 12.990 - 13.084: 98.6581% ( 1) 00:16:06.623 13.084 - 13.179: 98.6731% ( 2) 00:16:06.623 13.179 - 13.274: 98.6881% ( 2) 00:16:06.623 13.369 - 13.464: 98.6956% ( 1) 00:16:06.623 13.464 - 13.559: 98.7031% ( 1) 00:16:06.623 13.653 - 13.748: 98.7105% ( 1) 00:16:06.623 13.748 - 13.843: 98.7255% ( 2) 00:16:06.623 13.938 - 14.033: 98.7330% ( 1) 00:16:06.623 15.550 - 15.644: 98.7405% ( 1) 00:16:06.623 17.067 - 17.161: 98.7555% ( 2) 00:16:06.623 17.256 - 17.351: 98.7630% ( 1) 00:16:06.623 17.351 - 17.446: 98.7930% ( 4) 00:16:06.623 17.446 - 17.541: 98.8380% ( 6) 00:16:06.623 17.541 - 17.636: 98.8755% ( 5) 00:16:06.623 17.636 - 17.730: 98.9504% ( 10) 00:16:06.623 17.730 - 17.825: 98.9954% ( 6) 
00:16:06.623 17.825 - 17.920: 99.0404% ( 6) 00:16:06.623 17.920 - 18.015: 99.0554% ( 2) 00:16:06.623 18.015 - 18.110: 99.1604% ( 14) 00:16:06.623 18.110 - 18.204: 99.2803% ( 16) 00:16:06.623 18.204 - 18.299: 99.3853% ( 14) 00:16:06.623 18.299 - 18.394: 99.4227% ( 5) 00:16:06.623 18.394 - 18.489: 99.4752% ( 7) 00:16:06.623 18.489 - 18.584: 99.5652% ( 12) 00:16:06.623 18.584 - 18.679: 99.6402% ( 10) 00:16:06.623 18.679 - 18.773: 99.7076% ( 9) 00:16:06.623 18.773 - 18.868: 99.7601% ( 7) 00:16:06.623 18.868 - 18.963: 99.7976% ( 5) 00:16:06.623 18.963 - 19.058: 99.8126% ( 2) 00:16:06.623 19.153 - 19.247: 99.8351% ( 3) 00:16:06.623 19.247 - 19.342: 99.8501% ( 2) 00:16:06.623 19.627 - 19.721: 99.8651% ( 2) 00:16:06.623 20.101 - 20.196: 99.8726% ( 1) 00:16:06.623 23.988 - 24.083: 99.8801% ( 1) 00:16:06.623 24.083 - 24.178: 99.8875% ( 1) 00:16:06.623 28.065 - 28.255: 99.8950% ( 1) 00:16:06.623 3980.705 - 4004.978: 99.9700% ( 10) 00:16:06.623 4004.978 - 4029.250: 100.0000% ( 4) 00:16:06.623 00:16:06.623 Complete histogram 00:16:06.623 ================== 00:16:06.623 Range in us Cumulative Count 00:16:06.623 2.039 - 2.050: 7.8717% ( 1050) 00:16:06.623 2.050 - 2.062: 32.1838% ( 3243) 00:16:06.623 2.062 - 2.074: 35.3925% ( 428) 00:16:06.623 2.074 - 2.086: 45.3782% ( 1332) 00:16:06.623 2.086 - 2.098: 54.6443% ( 1236) 00:16:06.623 2.098 - 2.110: 56.8109% ( 289) 00:16:06.623 2.110 - 2.121: 64.6450% ( 1045) 00:16:06.623 2.121 - 2.133: 68.6108% ( 529) 00:16:06.623 2.133 - 2.145: 69.6979% ( 145) 00:16:06.623 2.145 - 2.157: 74.1885% ( 599) 00:16:06.623 2.157 - 2.169: 76.6774% ( 332) 00:16:06.623 2.169 - 2.181: 77.3746% ( 93) 00:16:06.623 2.181 - 2.193: 82.2026% ( 644) 00:16:06.623 2.193 - 2.204: 84.9314% ( 364) 00:16:06.623 2.204 - 2.216: 86.9855% ( 274) 00:16:06.623 2.216 - 2.228: 90.3966% ( 455) 00:16:06.623 2.228 - 2.240: 91.9484% ( 207) 00:16:06.623 2.240 - 2.252: 92.6606% ( 95) 00:16:06.623 2.252 - 2.264: 93.0729% ( 55) 00:16:06.623 2.264 - 2.276: 93.4103% ( 45) 00:16:06.623 2.276 - 2.287: 94.2874% ( 117) 00:16:06.623 2.287 - 2.299: 94.8722% ( 78) 00:16:06.623 2.299 - 2.311: 95.0371% ( 22) 00:16:06.623 2.311 - 2.323: 95.1496% ( 15) 00:16:06.623 2.323 - 2.335: 95.2470% ( 13) 00:16:06.623 2.335 - 2.347: 95.3670% ( 16) 00:16:06.623 2.347 - 2.359: 95.6593% ( 39) 00:16:06.623 2.359 - 2.370: 96.0942% ( 58) 00:16:06.623 2.370 - 2.382: 96.3041% ( 28) 00:16:06.623 2.382 - 2.394: 96.4315% ( 17) 00:16:06.623 2.394 - 2.406: 96.6789% ( 33) 00:16:06.623 2.406 - 2.418: 96.8963% ( 29) 00:16:06.623 2.418 - 2.430: 97.0687% ( 23) 00:16:06.623 2.430 - 2.441: 97.3011% ( 31) 00:16:06.623 2.441 - 2.453: 97.4661% ( 22) 00:16:06.623 2.453 - 2.465: 97.5710% ( 14) 00:16:06.623 2.465 - 2.477: 97.7210% ( 20) 00:16:06.623 2.477 - 2.489: 97.8184% ( 13) 00:16:06.623 2.489 - 2.501: 97.9084% ( 12) 00:16:06.623 2.501 - 2.513: 97.9984% ( 12) 00:16:06.623 2.513 - 2.524: 98.0658% ( 9) 00:16:06.623 2.524 - 2.536: 98.1558% ( 12) 00:16:06.623 2.536 - 2.548: 98.1858% ( 4) 00:16:06.623 2.548 - 2.560: 98.2308% ( 6) 00:16:06.623 2.560 - 2.572: 98.2607% ( 4) 00:16:06.623 2.572 - 2.584: 98.2757% ( 2) 00:16:06.623 2.584 - 2.596: 98.2982% ( 3) 00:16:06.623 2.596 - 2.607: 98.3132% ( 2) 00:16:06.623 2.607 - 2.619: 98.3207% ( 1) 00:16:06.623 2.619 - 2.631: 98.3432% ( 3) 00:16:06.623 2.643 - 2.655: 98.3582% ( 2) 00:16:06.623 2.667 - 2.679: 98.3657% ( 1) 00:16:06.623 2.690 - 2.702: 98.3732% ( 1) 00:16:06.623 2.702 - 2.714: 98.3807% ( 1) 00:16:06.623 2.738 - 2.750: 98.3882% ( 1) 00:16:06.623 2.750 - 2.761: 98.4032% ( 2) 00:16:06.623 2.809 - 2.821: 98.4107% ( 
1) 00:16:06.623 3.342 - 3.366: 98.4332% ( 3) 00:16:06.623 3.366 - 3.390: 98.4407% ( 1) 00:16:06.623 3.390 - 3.413: 98.4482% ( 1) 00:16:06.623 3.413 - 3.437: 98.4557% ( 1) 00:16:06.623 3.484 - 3.508: 98.4781% ( 3) 00:16:06.623 3.508 - 3.532: 98.5006% ( 3) 00:16:06.623 3.532 - 3.556: 98.5156% ( 2) 00:16:06.623 3.603 - 3.627: 98.5231% ( 1) 00:16:06.623 3.627 - 3.650: 98.5306% ( 1) 00:16:06.623 3.650 - 3.674: 98.5381% ( 1) 00:16:06.623 3.674 - 3.698: 98.5531% ( 2) 00:16:06.623 3.698 - 3.721: 98.5606% ( 1) 00:16:06.623 3.721 - 3.745: 98.5756% ( 2)
[2024-07-21 03:25:51.534491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:06.623 3.745 - 3.769: 98.5831% ( 1) 00:16:06.623 3.769 - 3.793: 98.5906% ( 1) 00:16:06.623 3.911 - 3.935: 98.5981% ( 1) 00:16:06.623 4.053 - 4.077: 98.6056% ( 1) 00:16:06.623 4.124 - 4.148: 98.6131% ( 1) 00:16:06.623 4.409 - 4.433: 98.6206% ( 1) 00:16:06.623 4.433 - 4.456: 98.6281% ( 1) 00:16:06.623 4.836 - 4.859: 98.6356% ( 1) 00:16:06.623 5.049 - 5.073: 98.6431% ( 1) 00:16:06.623 5.404 - 5.428: 98.6506% ( 1) 00:16:06.623 5.523 - 5.547: 98.6581% ( 1) 00:16:06.623 5.570 - 5.594: 98.6656% ( 1) 00:16:06.623 5.760 - 5.784: 98.6731% ( 1) 00:16:06.623 5.807 - 5.831: 98.6806% ( 1) 00:16:06.623 5.831 - 5.855: 98.6881% ( 1) 00:16:06.623 5.902 - 5.926: 98.6956% ( 1) 00:16:06.623 6.258 - 6.305: 98.7031% ( 1) 00:16:06.623 6.353 - 6.400: 98.7105% ( 1) 00:16:06.623 6.400 - 6.447: 98.7180% ( 1) 00:16:06.623 6.684 - 6.732: 98.7255% ( 1) 00:16:06.623 6.827 - 6.874: 98.7405% ( 2) 00:16:06.623 6.921 - 6.969: 98.7480% ( 1) 00:16:06.623 7.016 - 7.064: 98.7555% ( 1) 00:16:06.623 7.159 - 7.206: 98.7630% ( 1) 00:16:06.623 9.244 - 9.292: 98.7705% ( 1) 00:16:06.623 15.265 - 15.360: 98.7780% ( 1) 00:16:06.623 15.550 - 15.644: 98.7855% ( 1) 00:16:06.623 15.739 - 15.834: 98.8005% ( 2) 00:16:06.623 15.834 - 15.929: 98.8155% ( 2) 00:16:06.623 15.929 - 16.024: 98.8380% ( 3) 00:16:06.623 16.024 - 16.119: 98.8830% ( 6) 00:16:06.623 16.119 - 16.213: 98.9280% ( 6) 00:16:06.623 16.213 - 16.308: 98.9729% ( 6) 00:16:06.623 16.308 - 16.403: 98.9954% ( 3) 00:16:06.623 16.403 - 16.498: 99.0854% ( 12) 00:16:06.623 16.498 - 16.593: 99.1304% ( 6) 00:16:06.623 16.593 - 16.687: 99.1679% ( 5) 00:16:06.623 16.687 - 16.782: 99.1903% ( 3) 00:16:06.623 16.782 - 16.877: 99.2278% ( 5) 00:16:06.623 16.877 - 16.972: 99.2653% ( 5) 00:16:06.623 16.972 - 17.067: 99.2953% ( 4) 00:16:06.623 17.067 - 17.161: 99.3328% ( 5) 00:16:06.623 17.161 - 17.256: 99.3403% ( 1) 00:16:06.623 17.256 - 17.351: 99.3478% ( 1) 00:16:06.623 17.351 - 17.446: 99.3553% ( 1) 00:16:06.623 17.446 - 17.541: 99.3703% ( 2) 00:16:06.623 17.730 - 17.825: 99.3778% ( 1) 00:16:06.623 17.825 - 17.920: 99.3853% ( 1) 00:16:06.623 18.299 - 18.394: 99.3928% ( 1) 00:16:06.623 19.247 - 19.342: 99.4003% ( 1) 00:16:06.623 20.290 - 20.385: 99.4078% ( 1) 00:16:06.623 3009.801 - 3021.938: 99.4152% ( 1) 00:16:06.623 3980.705 - 4004.978: 99.9250% ( 68) 00:16:06.623 4004.978 - 4029.250: 99.9925% ( 9) 00:16:06.623 4102.068 - 4126.341: 100.0000% ( 1) 00:16:06.623 00:16:06.623 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:06.623 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:06.623 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:06.623 03:25:51 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:06.623 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.623 [ 00:16:06.623 { 00:16:06.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.623 "subtype": "Discovery", 00:16:06.623 "listen_addresses": [], 00:16:06.623 "allow_any_host": true, 00:16:06.623 "hosts": [] 00:16:06.623 }, 00:16:06.623 { 00:16:06.623 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.623 "subtype": "NVMe", 00:16:06.623 "listen_addresses": [ 00:16:06.623 { 00:16:06.623 "trtype": "VFIOUSER", 00:16:06.623 "adrfam": "IPv4", 00:16:06.623 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.623 "trsvcid": "0" 00:16:06.623 } 00:16:06.624 ], 00:16:06.624 "allow_any_host": true, 00:16:06.624 "hosts": [], 00:16:06.624 "serial_number": "SPDK1", 00:16:06.624 "model_number": "SPDK bdev Controller", 00:16:06.624 "max_namespaces": 32, 00:16:06.624 "min_cntlid": 1, 00:16:06.624 "max_cntlid": 65519, 00:16:06.624 "namespaces": [ 00:16:06.624 { 00:16:06.624 "nsid": 1, 00:16:06.624 "bdev_name": "Malloc1", 00:16:06.624 "name": "Malloc1", 00:16:06.624 "nguid": "AC270079305A444EA9EB009AEE2BA716", 00:16:06.624 "uuid": "ac270079-305a-444e-a9eb-009aee2ba716" 00:16:06.624 }, 00:16:06.624 { 00:16:06.624 "nsid": 2, 00:16:06.624 "bdev_name": "Malloc3", 00:16:06.624 "name": "Malloc3", 00:16:06.624 "nguid": "28EB5A09118E44A697E0E46E7D89A836", 00:16:06.624 "uuid": "28eb5a09-118e-44a6-97e0-e46e7d89a836" 00:16:06.624 } 00:16:06.624 ] 00:16:06.624 }, 00:16:06.624 { 00:16:06.624 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.624 "subtype": "NVMe", 00:16:06.624 "listen_addresses": [ 00:16:06.624 { 00:16:06.624 "trtype": "VFIOUSER", 00:16:06.624 "adrfam": "IPv4", 00:16:06.624 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.624 "trsvcid": "0" 00:16:06.624 } 00:16:06.624 ], 00:16:06.624 "allow_any_host": true, 00:16:06.624 "hosts": [], 00:16:06.624 "serial_number": "SPDK2", 00:16:06.624 "model_number": "SPDK bdev Controller", 00:16:06.624 "max_namespaces": 32, 00:16:06.624 "min_cntlid": 1, 00:16:06.624 "max_cntlid": 65519, 00:16:06.624 "namespaces": [ 00:16:06.624 { 00:16:06.624 "nsid": 1, 00:16:06.624 "bdev_name": "Malloc2", 00:16:06.624 "name": "Malloc2", 00:16:06.624 "nguid": "403CF3C1C50F4DA180877A40C9021DC5", 00:16:06.624 "uuid": "403cf3c1-c50f-4da1-8087-7a40c9021dc5" 00:16:06.624 } 00:16:06.624 ] 00:16:06.624 } 00:16:06.624 ] 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2377631 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:06.624 03:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:06.624 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.883 [2024-07-21 03:25:51.981083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.883 Malloc4 00:16:06.883 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:07.141 [2024-07-21 03:25:52.346777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.141 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:07.141 Asynchronous Event Request test 00:16:07.141 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.141 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.141 Registering asynchronous event callbacks... 00:16:07.141 Starting namespace attribute notice tests for all controllers... 00:16:07.141 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:07.141 aer_cb - Changed Namespace 00:16:07.141 Cleaning up... 00:16:07.399 [ 00:16:07.399 { 00:16:07.399 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.399 "subtype": "Discovery", 00:16:07.399 "listen_addresses": [], 00:16:07.399 "allow_any_host": true, 00:16:07.399 "hosts": [] 00:16:07.399 }, 00:16:07.399 { 00:16:07.399 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.399 "subtype": "NVMe", 00:16:07.399 "listen_addresses": [ 00:16:07.399 { 00:16:07.399 "trtype": "VFIOUSER", 00:16:07.399 "adrfam": "IPv4", 00:16:07.399 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.399 "trsvcid": "0" 00:16:07.399 } 00:16:07.399 ], 00:16:07.399 "allow_any_host": true, 00:16:07.399 "hosts": [], 00:16:07.399 "serial_number": "SPDK1", 00:16:07.399 "model_number": "SPDK bdev Controller", 00:16:07.399 "max_namespaces": 32, 00:16:07.399 "min_cntlid": 1, 00:16:07.399 "max_cntlid": 65519, 00:16:07.399 "namespaces": [ 00:16:07.399 { 00:16:07.399 "nsid": 1, 00:16:07.399 "bdev_name": "Malloc1", 00:16:07.399 "name": "Malloc1", 00:16:07.399 "nguid": "AC270079305A444EA9EB009AEE2BA716", 00:16:07.399 "uuid": "ac270079-305a-444e-a9eb-009aee2ba716" 00:16:07.399 }, 00:16:07.399 { 00:16:07.399 "nsid": 2, 00:16:07.399 "bdev_name": "Malloc3", 00:16:07.399 "name": "Malloc3", 00:16:07.399 "nguid": "28EB5A09118E44A697E0E46E7D89A836", 00:16:07.399 "uuid": "28eb5a09-118e-44a6-97e0-e46e7d89a836" 00:16:07.399 } 00:16:07.399 ] 00:16:07.399 }, 00:16:07.399 { 00:16:07.399 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.399 "subtype": "NVMe", 00:16:07.399 "listen_addresses": [ 00:16:07.399 { 00:16:07.399 "trtype": "VFIOUSER", 00:16:07.399 "adrfam": "IPv4", 00:16:07.399 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.399 "trsvcid": "0" 00:16:07.399 } 00:16:07.399 ], 00:16:07.399 "allow_any_host": true, 00:16:07.399 "hosts": [], 00:16:07.399 "serial_number": "SPDK2", 00:16:07.399 "model_number": "SPDK bdev Controller", 00:16:07.399 
"max_namespaces": 32, 00:16:07.399 "min_cntlid": 1, 00:16:07.399 "max_cntlid": 65519, 00:16:07.399 "namespaces": [ 00:16:07.399 { 00:16:07.399 "nsid": 1, 00:16:07.399 "bdev_name": "Malloc2", 00:16:07.399 "name": "Malloc2", 00:16:07.399 "nguid": "403CF3C1C50F4DA180877A40C9021DC5", 00:16:07.399 "uuid": "403cf3c1-c50f-4da1-8087-7a40c9021dc5" 00:16:07.399 }, 00:16:07.399 { 00:16:07.399 "nsid": 2, 00:16:07.399 "bdev_name": "Malloc4", 00:16:07.399 "name": "Malloc4", 00:16:07.399 "nguid": "FB88A6E5207A40589B71C86AA17BF43B", 00:16:07.399 "uuid": "fb88a6e5-207a-4058-9b71-c86aa17bf43b" 00:16:07.399 } 00:16:07.399 ] 00:16:07.399 } 00:16:07.399 ] 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2377631 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2372027 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2372027 ']' 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2372027 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2372027 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2372027' 00:16:07.399 killing process with pid 2372027 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2372027 00:16:07.399 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2372027 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2377771 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2377771' 00:16:07.966 Process pid: 2377771 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2377771 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2377771 ']' 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.966 03:25:52 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.966 03:25:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 [2024-07-21 03:25:53.037711] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:07.966 [2024-07-21 03:25:53.038852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:07.966 [2024-07-21 03:25:53.038929] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.966 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.966 [2024-07-21 03:25:53.103237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.966 [2024-07-21 03:25:53.193653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.966 [2024-07-21 03:25:53.193718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.966 [2024-07-21 03:25:53.193747] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.966 [2024-07-21 03:25:53.193761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.966 [2024-07-21 03:25:53.193773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.966 [2024-07-21 03:25:53.193831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.966 [2024-07-21 03:25:53.193886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.966 [2024-07-21 03:25:53.193998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.966 [2024-07-21 03:25:53.194001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.223 [2024-07-21 03:25:53.292072] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:08.223 [2024-07-21 03:25:53.292310] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:08.223 [2024-07-21 03:25:53.292568] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:08.223 [2024-07-21 03:25:53.293177] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:08.223 [2024-07-21 03:25:53.293407] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:08.223 03:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:08.223 03:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:08.223 03:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:09.154 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:09.413 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:09.413 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:09.413 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:09.413 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:09.413 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:09.671 Malloc1 00:16:09.671 03:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:09.929 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.186 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:10.443 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.443 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.443 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:10.701 Malloc2 00:16:10.701 03:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:10.957 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:11.214 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2377771 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2377771 ']' 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2377771 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.471 03:25:56 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2377771 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2377771' 00:16:11.471 killing process with pid 2377771 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2377771 00:16:11.471 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2377771 00:16:11.728 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:11.728 03:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:11.728 00:16:11.728 real 0m53.026s 00:16:11.728 user 3m29.584s 00:16:11.728 sys 0m4.381s 00:16:11.728 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.729 03:25:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:11.729 ************************************ 00:16:11.729 END TEST nvmf_vfio_user 00:16:11.729 ************************************ 00:16:11.729 03:25:56 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:11.729 03:25:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:11.729 03:25:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.729 03:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.729 ************************************ 00:16:11.729 START TEST nvmf_vfio_user_nvme_compliance 00:16:11.729 ************************************ 00:16:11.729 03:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:11.729 * Looking for test storage... 
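The START TEST banner above comes from the run_test wrapper in autotest_common.sh, which also accounts the real/user/sys times printed at the end of each suite. To rerun just this suite outside the CI harness, the same invocation should work (a sketch, assuming a built SPDK tree and root privileges):

  cd /path/to/spdk                # assumption: built tree; the trace uses the Jenkins workspace path
  sudo ./test/nvme/compliance/compliance.sh --transport=tcp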
00:16:11.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:11.729 03:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.729 03:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.729 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2378253 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2378253' 00:16:11.730 Process pid: 2378253 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2378253 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2378253 ']' 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:11.730 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.988 [2024-07-21 03:25:57.070465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:11.988 [2024-07-21 03:25:57.070568] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.988 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.988 [2024-07-21 03:25:57.133330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.988 [2024-07-21 03:25:57.217318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.988 [2024-07-21 03:25:57.217377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.988 [2024-07-21 03:25:57.217402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.988 [2024-07-21 03:25:57.217414] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.988 [2024-07-21 03:25:57.217424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
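The pid capture, trap, and waitforlisten lines above follow the standard autotest launch pattern; a minimal sketch of it (helper names are the ones from autotest_common.sh shown in the trace):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &    # -m 0x7: cores 0-2, matching the 3 reactors below
  nvmfpid=$!
  echo "Process pid: $nvmfpid"
  trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $nvmfpid                           # polls until /var/tmp/spdk.sock accepts RPCs

The trap guarantees the target is torn down even if a compliance case wedges, which is why every suite in this log ends with a "killing process with pid ..." line.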
00:16:11.988 [2024-07-21 03:25:57.217488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.988 [2024-07-21 03:25:57.217548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.988 [2024-07-21 03:25:57.217550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.246 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:12.246 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:12.246 03:25:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 malloc0 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.179 
03:25:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:13.179 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.437 00:16:13.437 00:16:13.437 CUnit - A unit testing framework for C - Version 2.1-3 00:16:13.437 http://cunit.sourceforge.net/ 00:16:13.437 00:16:13.437 00:16:13.437 Suite: nvme_compliance 00:16:13.437 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-21 03:25:58.570869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.437 [2024-07-21 03:25:58.572305] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:13.437 [2024-07-21 03:25:58.572330] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:13.437 [2024-07-21 03:25:58.572342] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:13.437 [2024-07-21 03:25:58.576902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.437 passed 00:16:13.437 Test: admin_identify_ctrlr_verify_fused ...[2024-07-21 03:25:58.662491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.437 [2024-07-21 03:25:58.665508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.437 passed 00:16:13.694 Test: admin_identify_ns ...[2024-07-21 03:25:58.752016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.694 [2024-07-21 03:25:58.812644] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:13.694 [2024-07-21 03:25:58.820630] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:13.694 [2024-07-21 03:25:58.841775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.694 passed 00:16:13.694 Test: admin_get_features_mandatory_features ...[2024-07-21 03:25:58.922421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.694 [2024-07-21 03:25:58.927459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.694 passed 00:16:13.952 Test: admin_get_features_optional_features ...[2024-07-21 03:25:59.012076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.952 [2024-07-21 03:25:59.015093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.952 passed 00:16:13.952 Test: admin_set_features_number_of_queues ...[2024-07-21 03:25:59.100444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.952 [2024-07-21 03:25:59.205845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.952 passed 00:16:14.209 Test: admin_get_log_page_mandatory_logs ...[2024-07-21 03:25:59.291190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.209 [2024-07-21 03:25:59.294213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.209 passed 00:16:14.209 Test: admin_get_log_page_with_lpo ...[2024-07-21 03:25:59.378375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.209 [2024-07-21 03:25:59.446649] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:14.209 [2024-07-21 03:25:59.459762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.209 passed 00:16:14.466 Test: fabric_property_get ...[2024-07-21 03:25:59.541921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.466 [2024-07-21 03:25:59.543206] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:14.466 [2024-07-21 03:25:59.546974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.466 passed 00:16:14.466 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-21 03:25:59.631498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.466 [2024-07-21 03:25:59.632821] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:14.466 [2024-07-21 03:25:59.634520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.466 passed 00:16:14.466 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-21 03:25:59.715737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.723 [2024-07-21 03:25:59.803625] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.723 [2024-07-21 03:25:59.819621] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.723 [2024-07-21 03:25:59.824847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.723 passed 00:16:14.723 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-21 03:25:59.905545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.723 [2024-07-21 03:25:59.909909] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:14.723 [2024-07-21 03:25:59.911580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.723 passed 00:16:14.723 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-21 03:25:59.992327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.980 [2024-07-21 03:26:00.067622] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.980 [2024-07-21 03:26:00.091625] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.980 [2024-07-21 03:26:00.096852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.980 passed 00:16:14.980 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-21 03:26:00.184394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.980 [2024-07-21 03:26:00.185710] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.980 [2024-07-21 03:26:00.185749] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.980 [2024-07-21 03:26:00.187414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.980 passed 00:16:14.980 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-21 03:26:00.266155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.237 [2024-07-21 03:26:00.358625] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:15.237 [2024-07-21 03:26:00.366622] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:15.237 [2024-07-21 03:26:00.374624] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:15.237 [2024-07-21 03:26:00.382628] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:15.237 [2024-07-21 03:26:00.411734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.237 passed 00:16:15.237 Test: admin_create_io_sq_verify_pc ...[2024-07-21 03:26:00.492968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.237 [2024-07-21 03:26:00.509636] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:15.237 [2024-07-21 03:26:00.527312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.494 passed 00:16:15.494 Test: admin_create_io_qp_max_qps ...[2024-07-21 03:26:00.613927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.426 [2024-07-21 03:26:01.713632] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:16.991 [2024-07-21 03:26:02.087987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.991 passed 00:16:16.991 Test: admin_create_io_sq_shared_cq ...[2024-07-21 03:26:02.170162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.249 [2024-07-21 03:26:02.305638] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:17.249 [2024-07-21 03:26:02.342706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.249 passed 00:16:17.249 00:16:17.249 Run Summary: Type Total Ran Passed Failed Inactive 00:16:17.249 suites 1 1 n/a 0 0 00:16:17.249 tests 18 18 18 0 0 00:16:17.249 asserts 360 360 360 0 n/a 00:16:17.249 00:16:17.249 Elapsed time = 1.565 seconds 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2378253 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2378253 ']' 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2378253 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2378253 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2378253' 00:16:17.249 killing process with pid 2378253 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 2378253 00:16:17.249 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2378253 00:16:17.507 03:26:02 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:17.507 00:16:17.507 real 0m5.705s 00:16:17.507 user 0m16.026s 00:16:17.507 sys 0m0.568s 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 ************************************ 00:16:17.507 END TEST nvmf_vfio_user_nvme_compliance 00:16:17.507 ************************************ 00:16:17.507 03:26:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:17.507 03:26:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:17.507 03:26:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:17.507 03:26:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 ************************************ 00:16:17.507 START TEST nvmf_vfio_user_fuzz 00:16:17.507 ************************************ 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:17.507 * Looking for test storage... 00:16:17.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.507 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2379029 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2379029' 00:16:17.508 Process pid: 2379029 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2379029 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2379029 ']' 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
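The fuzz stage that follows stands up a single vfio-user controller and points nvme_fuzz at it. Condensed from the trace below (rpc_cmd is the autotest wrapper around rpc.py; all values are copied from the trace, and the trailing -N -a flags are passed through as-is):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  # -t bounds the fuzzing time and -S fixes the random seed, so a crashing
  # command sequence can be replayed; the seeds echo back in the summary below.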
00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:17.508 03:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.766 03:26:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:17.766 03:26:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:17.766 03:26:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.158 malloc0 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:19.158 03:26:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:51.208 Fuzzing completed. 
Shutting down the fuzz application 00:16:51.208 00:16:51.208 Dumping successful admin opcodes: 00:16:51.208 8, 9, 10, 24, 00:16:51.208 Dumping successful io opcodes: 00:16:51.208 0, 00:16:51.208 NS: 0x200003a1ef00 I/O qp, Total commands completed: 567860, total successful commands: 2183, random_seed: 2044995392 00:16:51.209 NS: 0x200003a1ef00 admin qp, Total commands completed: 72632, total successful commands: 572, random_seed: 181807552 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2379029 ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2379029' 00:16:51.209 killing process with pid 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2379029 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:51.209 00:16:51.209 real 0m32.176s 00:16:51.209 user 0m31.214s 00:16:51.209 sys 0m28.804s 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.209 03:26:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.209 ************************************ 00:16:51.209 END TEST nvmf_vfio_user_fuzz 00:16:51.209 ************************************ 00:16:51.209 03:26:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.209 03:26:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:51.209 03:26:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:51.209 03:26:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.209 ************************************ 00:16:51.209 START TEST nvmf_host_management 00:16:51.209 
************************************ 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.209 * Looking for test storage... 00:16:51.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.209 03:26:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
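nvmftestinit above culminates in the network-namespace setup that every TCP suite in this log relies on: the target-side NIC is moved into its own namespace so a single host can act as both initiator and target. The ip commands later in the trace reduce to this sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones from the trace):

  ip netns add cvl_0_0_ns_spdk                       # nvmf_tgt will run inside this namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # sanity check, mirrored by the pings below

The two pings in the trace (root ns to 10.0.0.2, namespace to 10.0.0.1) verify both directions of the link before any NVMe traffic is attempted.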
00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.209 03:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.775 03:26:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.775 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.776 03:26:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.776 03:26:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:16:51.776 00:16:51.776 --- 10.0.0.2 ping statistics --- 00:16:51.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.776 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:16:51.776 00:16:51.776 --- 10.0.0.1 ping statistics --- 00:16:51.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.776 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2384416 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2384416 00:16:51.776 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2384416 ']' 00:16:52.034 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.034 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.034 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:52.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.034 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.034 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.034 [2024-07-21 03:26:37.130231] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:52.034 [2024-07-21 03:26:37.130325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.034 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.034 [2024-07-21 03:26:37.199908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.034 [2024-07-21 03:26:37.292346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.034 [2024-07-21 03:26:37.292405] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.034 [2024-07-21 03:26:37.292422] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.034 [2024-07-21 03:26:37.292435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.034 [2024-07-21 03:26:37.292447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.034 [2024-07-21 03:26:37.292543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.034 [2024-07-21 03:26:37.292640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.034 [2024-07-21 03:26:37.292709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.034 [2024-07-21 03:26:37.292712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 [2024-07-21 03:26:37.444264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 Malloc0 00:16:52.293 [2024-07-21 03:26:37.505186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2384574 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2384574 /var/tmp/bdevperf.sock 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2384574 ']' 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
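waitforlisten blocks until the freshly forked bdevperf process has created and is answering on its UNIX-domain RPC socket. A rough equivalent of that wait loop (function name, retry budget, and the rpc.py path are assumptions, not the helper's actual code; run from an SPDK checkout):

# Hypothetical re-creation of the waitforlisten idea: poll until the
# application's RPC socket exists and answers a trivial RPC.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100

    echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc.py exits 0 once the app answers on the socket.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}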
00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:52.293 03:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme0", 00:16:52.293 "trtype": "tcp", 00:16:52.293 "traddr": "10.0.0.2", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "4420", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:52.293 "hdgst": false, 00:16:52.293 "ddgst": false 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 }' 00:16:52.293 [2024-07-21 03:26:37.584381] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:52.293 [2024-07-21 03:26:37.584453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384574 ] 00:16:52.551 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.551 [2024-07-21 03:26:37.645695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.551 [2024-07-21 03:26:37.732452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.810 Running I/O for 10 seconds... 
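As the trace shows, gen_nvmf_target_json assembles a bdevperf JSON config on the fly: a per-subsystem heredoc template is filled in, validated and pretty-printed with jq, and handed to bdevperf through process substitution, which is why the command line shows --json /dev/fd/63 and no config file ever touches disk. A trimmed sketch of the same pattern (the outer "subsystems" wrapper and paths are simplified assumptions; only the attach-controller entry is taken from the log):

# Sketch of the config-over-process-substitution pattern.
gen_json() {
    jq . <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# bdevperf sees the generated config as /dev/fd/63 (or similar).
build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 10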
00:16:52.810 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.810 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:52.810 03:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:52.810 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.810 03:26:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:52.810 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=542 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 542 -ge 100 ']' 00:16:53.071 03:26:38 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:53.072 [2024-07-21 03:26:38.340001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d120 is same with the state(5) to be set
00:16:53.072 [... the same tcp.c:1598 recv-state message for tqpair=0xb4d120 repeats ~60 more times (03:26:38.340104 through 03:26:38.340889); duplicates collapsed ...]
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:53.072 03:26:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:53.072 [2024-07-21 03:26:38.355083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:53.072 [2024-07-21 03:26:38.355124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:53.072 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 (03:26:38.355142 through 03:26:38.355214); duplicates collapsed ...]
00:16:53.072 [2024-07-21 03:26:38.355228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc681e0 is same with the state(5) to be set
00:16:53.072 [2024-07-21 03:26:38.355326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:53.072 [2024-07-21 03:26:38.355350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:53.073 [... the same print_command / ABORTED - SQ DELETION pair repeats for all 64 outstanding I/Os (READ cid:62-63, WRITE cid:0-61; lba:81664 through lba:89728, 03:26:38.355375 through 03:26:38.357341); duplicates collapsed ...]
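The abort storm collapsed above is the point of the test: host_management revokes the host's access to the subsystem while bdevperf has a full queue of 64 I/Os in flight, so every outstanding command completes as ABORTED - SQ DELETION when the qpairs are torn down, and the host is then re-added so the controller reset can reconnect. Driven by hand with rpc.py, the same fault injection would look roughly like this (socket path and script location assumed):

# Hypothetical manual version of the remove/re-add fault injection.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Yank the host's access while I/O is running: in-flight commands are
# failed with ABORTED - SQ DELETION as the target deletes the queues.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

sleep 1   # let the initiator notice the disconnect and start its reset path

# Restore access; the host-side controller reset can now reconnect.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0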
00:16:53.074 [2024-07-21 03:26:38.357420] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1079110 was disconnected and freed. reset controller.
00:16:53.074 [2024-07-21 03:26:38.358529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:53.074 task offset: 81664 on job bdev=Nvme0n1 fails
00:16:53.074
00:16:53.074                                                  Latency(us)
00:16:53.074 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:53.074 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:53.074 Job: Nvme0n1 ended in about 0.41 seconds with error
00:16:53.075 Verification LBA range: start 0x0 length 0x400
00:16:53.075 	 Nvme0n1             :       0.41    1572.04      98.25     157.70       0.00   35949.30    2694.26   33787.45
00:16:53.075 ===================================================================================================================
00:16:53.075 Total               :               1572.04      98.25     157.70       0.00   35949.30    2694.26   33787.45
00:16:53.075 [2024-07-21 03:26:38.360385] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:53.075 [2024-07-21 03:26:38.360428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc681e0 (9): Bad file descriptor
00:16:53.075 [2024-07-21 03:26:38.370426] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2384574
00:16:54.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2384574) - No such process
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:54.449 {
00:16:54.449   "params": {
00:16:54.449     "name": "Nvme$subsystem",
00:16:54.449     "trtype": "$TEST_TRANSPORT",
00:16:54.449     "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:54.449     "adrfam": "ipv4",
00:16:54.449     "trsvcid": "$NVMF_PORT",
00:16:54.449     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:54.449     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:54.449     "hdgst": ${hdgst:-false},
00:16:54.449     "ddgst": ${ddgst:-false}
00:16:54.449   },
00:16:54.449   "method": "bdev_nvme_attach_controller"
00:16:54.449 }
00:16:54.449 EOF
00:16:54.449 )")
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:16:54.449 03:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:54.449 "params": {
00:16:54.449 "name": "Nvme0",
00:16:54.449 "trtype": "tcp",
00:16:54.449 "traddr": "10.0.0.2",
00:16:54.449 "adrfam": "ipv4",
00:16:54.449 "trsvcid": "4420",
00:16:54.449 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:54.449 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:54.449 "hdgst": false,
00:16:54.449 "ddgst": false
00:16:54.449 },
00:16:54.449 "method": "bdev_nvme_attach_controller"
00:16:54.449 }'
00:16:54.449 [2024-07-21 03:26:39.399983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:16:54.449 [2024-07-21 03:26:39.400057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384736 ]
00:16:54.449 EAL: No free 2048 kB hugepages reported on node 1
00:16:54.449 [2024-07-21 03:26:39.461108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:54.449 [2024-07-21 03:26:39.549545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:54.449 Running I/O for 1 seconds...
00:16:55.825
00:16:55.825                                                  Latency(us)
00:16:55.825 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:55.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:55.825 Verification LBA range: start 0x0 length 0x400
00:16:55.825 	 Nvme0n1             :       1.01    1589.55      99.35       0.00       0.00   39620.16    6043.88   34952.53
00:16:55.825 ===================================================================================================================
00:16:55.825 Total               :               1589.55      99.35       0.00       0.00   39620.16    6043.88   34952.53
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:55.825 03:26:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:55.825 rmmod nvme_tcp
00:16:55.825 rmmod nvme_fabrics
00:16:55.825 rmmod nvme_keyring
00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
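nvmfcleanup wraps module removal in a set +e retry loop because nvme-tcp can stay busy for a moment after the last connection drops. A condensed sketch of that retry pattern (the 20-attempt budget comes from the trace; the back-off sleep is an assumption):

# Sketch of the retrying module unload seen in nvmf/common.sh.
set +e                     # unload may fail while references drain
for i in {1..20}; do
    if modprobe -v -r nvme-tcp; then
        break              # module gone; stop retrying
    fi
    sleep 0.5              # assumption: brief pause between attempts
done
modprobe -v -r nvme-fabrics
set -e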
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2384416 ']' 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2384416 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2384416 ']' 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2384416 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2384416 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2384416' 00:16:55.825 killing process with pid 2384416 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2384416 00:16:55.825 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2384416 00:16:56.083 [2024-07-21 03:26:41.253086] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.083 03:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.610 03:26:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:58.610 03:26:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:58.610 00:16:58.610 real 0m8.391s 00:16:58.610 user 0m18.601s 00:16:58.610 sys 0m2.577s 00:16:58.610 03:26:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:58.610 03:26:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 ************************************ 00:16:58.610 END TEST nvmf_host_management 00:16:58.610 ************************************ 00:16:58.610 03:26:43 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:58.610 03:26:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:58.610 03:26:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:58.610 03:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 ************************************ 00:16:58.610 START TEST 
nvmf_lvol 00:16:58.610 ************************************ 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:58.610 * Looking for test storage... 00:16:58.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.610 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.611 03:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:00.534 03:26:45 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
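The device walk above matches each PCI function against known Intel E810/X722 and Mellanox vendor:device IDs (0x8086:0x159b here is an E810 "ice" function) and then reads the function's net/ directory in sysfs to learn its kernel netdev name. A sketch of that last step, using this host's two functions (the loop is illustrative; the sysfs glob is the one the script itself uses):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # each PCI network function exposes its netdev name under its sysfs node
    ls "/sys/bus/pci/devices/$pci/net/"
done
# -> cvl_0_0 (on 0000:0a:00.0) and cvl_0_1 (on 0000:0a:00.1)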
00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:00.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:17:00.534 00:17:00.534 --- 10.0.0.2 ping statistics --- 00:17:00.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.534 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
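Condensed from the nvmf_tcp_init calls above: one port of the two-port NIC is moved into a private network namespace to act as the target, its sibling stays in the host namespace as the initiator, and a single ping in each direction verifies the 10.0.0.0/24 link. A sketch with the interface names from this run (root required):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator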
00:17:00.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:17:00.534 00:17:00.534 --- 10.0.0.1 ping statistics --- 00:17:00.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.534 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2386926 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2386926 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2386926 ']' 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.534 [2024-07-21 03:26:45.561450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:00.534 [2024-07-21 03:26:45.561541] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.534 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.534 [2024-07-21 03:26:45.632511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:00.534 [2024-07-21 03:26:45.723101] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.534 [2024-07-21 03:26:45.723162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
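The target is then launched inside that namespace via nvmfappstart, and the harness blocks on waitforlisten until the RPC socket answers. A sketch of that launch-and-wait step; the polling loop below is an assumption about what the helper does, not its exact code:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# poll the default RPC socket until the app is ready to take commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done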
00:17:00.534 [2024-07-21 03:26:45.723188] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.534 [2024-07-21 03:26:45.723203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.534 [2024-07-21 03:26:45.723216] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.534 [2024-07-21 03:26:45.723274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.534 [2024-07-21 03:26:45.723329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.534 [2024-07-21 03:26:45.723346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.534 03:26:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.790 03:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.790 [2024-07-21 03:26:46.076861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.790 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.354 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:01.354 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.354 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:01.354 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:01.612 03:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:01.868 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a4d226cb-3c8a-4ff0-b589-34f73f9e3266 00:17:01.868 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4d226cb-3c8a-4ff0-b589-34f73f9e3266 lvol 20 00:17:02.125 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1dd080e1-f5f8-437f-be6b-f03ac868ea63 00:17:02.125 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:02.382 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1dd080e1-f5f8-437f-be6b-f03ac868ea63 00:17:02.639 03:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
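Stripped of the harness wrappers, the provisioning the test just performed is a short RPC script: two 64 MiB malloc bdevs striped into a RAID-0, a logical volume store on the stripe, one 20 MiB lvol, and an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420. Condensed, with the UUIDs this run returned:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                       # -> Malloc0
rpc.py bdev_malloc_create 64 512                       # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs              # -> a4d226cb-3c8a-4ff0-b589-34f73f9e3266
rpc.py bdev_lvol_create -u a4d226cb-3c8a-4ff0-b589-34f73f9e3266 lvol 20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1dd080e1-f5f8-437f-be6b-f03ac868ea63
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420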
00:17:02.896 [2024-07-21 03:26:48.094555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.896 03:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.153 03:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2387237 00:17:03.153 03:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:03.153 03:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:03.153 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.116 03:26:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1dd080e1-f5f8-437f-be6b-f03ac868ea63 MY_SNAPSHOT 00:17:04.373 03:26:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c50cb303-7e94-4b32-961e-32109db63025 00:17:04.373 03:26:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1dd080e1-f5f8-437f-be6b-f03ac868ea63 30 00:17:04.936 03:26:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c50cb303-7e94-4b32-961e-32109db63025 MY_CLONE 00:17:04.936 03:26:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8a709f80-c756-4a2c-8f39-43605386b97b 00:17:04.936 03:26:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8a709f80-c756-4a2c-8f39-43605386b97b 00:17:05.867 03:26:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2387237 00:17:13.965 Initializing NVMe Controllers 00:17:13.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:13.965 Controller IO queue size 128, less than required. 00:17:13.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:13.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:13.965 Initialization complete. Launching workers. 
00:17:13.965 ======================================================== 00:17:13.965 Latency(us) 00:17:13.965 Device Information : IOPS MiB/s Average min max 00:17:13.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10652.30 41.61 12024.78 2050.19 70470.65 00:17:13.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10633.30 41.54 12049.08 2338.61 60424.18 00:17:13.965 ======================================================== 00:17:13.965 Total : 21285.60 83.15 12036.92 2050.19 70470.65 00:17:13.965 00:17:13.965 03:26:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:13.965 03:26:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1dd080e1-f5f8-437f-be6b-f03ac868ea63 00:17:14.222 03:26:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4d226cb-3c8a-4ff0-b589-34f73f9e3266 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.480 rmmod nvme_tcp 00:17:14.480 rmmod nvme_fabrics 00:17:14.480 rmmod nvme_keyring 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2386926 ']' 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2386926 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2386926 ']' 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2386926 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2386926 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2386926' 00:17:14.480 killing process with pid 2386926 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2386926 00:17:14.480 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2386926 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.738 
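The substance of this test sits between the perf launch and the latency table above: every volume-management RPC runs while spdk_nvme_perf keeps 128 random-write requests in flight against the exported namespace from lcores 3 and 4 (core mask 0x18). Condensed from the calls above, with this run's UUIDs and pid:

rpc.py bdev_lvol_snapshot 1dd080e1-f5f8-437f-be6b-f03ac868ea63 MY_SNAPSHOT   # -> c50cb303-...
rpc.py bdev_lvol_resize 1dd080e1-f5f8-437f-be6b-f03ac868ea63 30              # grow the live lvol 20 -> 30 MiB
rpc.py bdev_lvol_clone c50cb303-7e94-4b32-961e-32109db63025 MY_CLONE         # -> 8a709f80-...
rpc.py bdev_lvol_inflate 8a709f80-c756-4a2c-8f39-43605386b97b                # detach the clone from its snapshot
wait 2387237                                                                 # let the 10 s perf run finish
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete 1dd080e1-f5f8-437f-be6b-f03ac868ea63
rpc.py bdev_lvol_delete_lvstore -u a4d226cb-3c8a-4ff0-b589-34f73f9e3266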
03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.738 03:26:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.271 03:27:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.271 00:17:17.271 real 0m18.605s 00:17:17.271 user 1m3.430s 00:17:17.271 sys 0m5.710s 00:17:17.271 03:27:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.271 03:27:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:17.271 ************************************ 00:17:17.271 END TEST nvmf_lvol 00:17:17.271 ************************************ 00:17:17.271 03:27:01 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:17.271 03:27:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:17.271 03:27:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.271 03:27:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.271 ************************************ 00:17:17.271 START TEST nvmf_lvs_grow 00:17:17.271 ************************************ 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:17.271 * Looking for test storage... 
00:17:17.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.271 03:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.170 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:19.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:19.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:19.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:19.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:17:19.171 00:17:19.171 --- 10.0.0.2 ping statistics --- 00:17:19.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.171 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:17:19.171 00:17:19.171 --- 10.0.0.1 ping statistics --- 00:17:19.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.171 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2390599 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2390599 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2390599 ']' 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.171 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.171 [2024-07-21 03:27:04.282988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:19.171 [2024-07-21 03:27:04.283061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.171 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.171 [2024-07-21 03:27:04.350036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.171 [2024-07-21 03:27:04.439689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.171 [2024-07-21 03:27:04.439744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:19.171 [2024-07-21 03:27:04.439758] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.171 [2024-07-21 03:27:04.439771] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.171 [2024-07-21 03:27:04.439782] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.171 [2024-07-21 03:27:04.439814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.429 03:27:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:19.686 [2024-07-21 03:27:04.801316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.686 ************************************ 00:17:19.686 START TEST lvs_grow_clean 00:17:19.686 ************************************ 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.686 03:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:19.944 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:19.944 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:20.202 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:20.202 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:20.202 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:20.459 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:20.459 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:20.459 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 863dabaa-4c51-4d0c-a689-9bdccca29280 lvol 150 00:17:20.716 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aaaf7f66-6620-4f49-97c4-e0322709ee19 00:17:20.716 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.716 03:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:20.974 [2024-07-21 03:27:06.109829] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:20.974 [2024-07-21 03:27:06.109930] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:20.974 true 00:17:20.974 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:20.974 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:21.232 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:21.232 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:21.490 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aaaf7f66-6620-4f49-97c4-e0322709ee19 00:17:21.747 03:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:22.005 [2024-07-21 03:27:07.165075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.005 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2391196 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2391196 /var/tmp/bdevperf.sock 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2391196 ']' 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.263 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:22.263 [2024-07-21 03:27:07.508358] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
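The setup traced above reduces to a short RPC sequence: export the lvol through an NVMe-oF subsystem on a TCP listener, then point bdevperf, which was started with -z and sits idle until perform_tests arrives over its own RPC socket, at that listener as an NVMe/TCP host. A condensed sketch of that sequence, with the long workspace path shortened to rpc.py and the lvol UUID taken from this run:

    # export the lvol over NVMe/TCP (rpc.py stands for scripts/rpc.py in the SPDK tree)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aaaf7f66-6620-4f49-97c4-e0322709ee19
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # attach from the idle bdevperf process over its dedicated RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0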
00:17:22.263 [2024-07-21 03:27:07.508431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391196 ] 00:17:22.263 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.263 [2024-07-21 03:27:07.568688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.521 [2024-07-21 03:27:07.656414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.521 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.521 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:22.521 03:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:23.085 Nvme0n1 00:17:23.085 03:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:23.343 [ 00:17:23.343 { 00:17:23.343 "name": "Nvme0n1", 00:17:23.343 "aliases": [ 00:17:23.343 "aaaf7f66-6620-4f49-97c4-e0322709ee19" 00:17:23.343 ], 00:17:23.343 "product_name": "NVMe disk", 00:17:23.343 "block_size": 4096, 00:17:23.343 "num_blocks": 38912, 00:17:23.343 "uuid": "aaaf7f66-6620-4f49-97c4-e0322709ee19", 00:17:23.343 "assigned_rate_limits": { 00:17:23.343 "rw_ios_per_sec": 0, 00:17:23.343 "rw_mbytes_per_sec": 0, 00:17:23.343 "r_mbytes_per_sec": 0, 00:17:23.343 "w_mbytes_per_sec": 0 00:17:23.343 }, 00:17:23.343 "claimed": false, 00:17:23.343 "zoned": false, 00:17:23.343 "supported_io_types": { 00:17:23.343 "read": true, 00:17:23.343 "write": true, 00:17:23.343 "unmap": true, 00:17:23.343 "write_zeroes": true, 00:17:23.343 "flush": true, 00:17:23.343 "reset": true, 00:17:23.343 "compare": true, 00:17:23.343 "compare_and_write": true, 00:17:23.343 "abort": true, 00:17:23.343 "nvme_admin": true, 00:17:23.343 "nvme_io": true 00:17:23.343 }, 00:17:23.343 "memory_domains": [ 00:17:23.343 { 00:17:23.343 "dma_device_id": "system", 00:17:23.343 "dma_device_type": 1 00:17:23.343 } 00:17:23.343 ], 00:17:23.343 "driver_specific": { 00:17:23.343 "nvme": [ 00:17:23.343 { 00:17:23.343 "trid": { 00:17:23.343 "trtype": "TCP", 00:17:23.343 "adrfam": "IPv4", 00:17:23.343 "traddr": "10.0.0.2", 00:17:23.343 "trsvcid": "4420", 00:17:23.343 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:23.343 }, 00:17:23.343 "ctrlr_data": { 00:17:23.343 "cntlid": 1, 00:17:23.343 "vendor_id": "0x8086", 00:17:23.343 "model_number": "SPDK bdev Controller", 00:17:23.343 "serial_number": "SPDK0", 00:17:23.343 "firmware_revision": "24.05.1", 00:17:23.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:23.343 "oacs": { 00:17:23.343 "security": 0, 00:17:23.343 "format": 0, 00:17:23.343 "firmware": 0, 00:17:23.343 "ns_manage": 0 00:17:23.343 }, 00:17:23.343 "multi_ctrlr": true, 00:17:23.343 "ana_reporting": false 00:17:23.343 }, 00:17:23.343 "vs": { 00:17:23.343 "nvme_version": "1.3" 00:17:23.343 }, 00:17:23.343 "ns_data": { 00:17:23.343 "id": 1, 00:17:23.343 "can_share": true 00:17:23.343 } 00:17:23.343 } 00:17:23.343 ], 00:17:23.343 "mp_policy": "active_passive" 00:17:23.343 } 00:17:23.343 } 00:17:23.343 ] 00:17:23.343 03:27:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2391574 00:17:23.343 03:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:23.343 03:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.343 Running I/O for 10 seconds... 00:17:24.716 Latency(us) 00:17:24.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.716 Nvme0n1 : 1.00 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:17:24.716 =================================================================================================================== 00:17:24.716 Total : 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:17:24.716 00:17:25.280 03:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:25.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.538 Nvme0n1 : 2.00 14550.50 56.84 0.00 0.00 0.00 0.00 0.00 00:17:25.538 =================================================================================================================== 00:17:25.538 Total : 14550.50 56.84 0.00 0.00 0.00 0.00 0.00 00:17:25.538 00:17:25.538 true 00:17:25.538 03:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:25.538 03:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.796 03:27:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:25.796 03:27:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.796 03:27:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2391574 00:17:26.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.382 Nvme0n1 : 3.00 14632.33 57.16 0.00 0.00 0.00 0.00 0.00 00:17:26.382 =================================================================================================================== 00:17:26.382 Total : 14632.33 57.16 0.00 0.00 0.00 0.00 0.00 00:17:26.382 00:17:27.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.315 Nvme0n1 : 4.00 14831.75 57.94 0.00 0.00 0.00 0.00 0.00 00:17:27.315 =================================================================================================================== 00:17:27.315 Total : 14831.75 57.94 0.00 0.00 0.00 0.00 0.00 00:17:27.315 00:17:28.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.686 Nvme0n1 : 5.00 14837.20 57.96 0.00 0.00 0.00 0.00 0.00 00:17:28.686 =================================================================================================================== 00:17:28.686 Total : 14837.20 57.96 0.00 0.00 0.00 0.00 0.00 00:17:28.686 00:17:29.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.619 Nvme0n1 : 6.00 14862.00 58.05 0.00 0.00 0.00 0.00 0.00 00:17:29.619 
=================================================================================================================== 00:17:29.619 Total : 14862.00 58.05 0.00 0.00 0.00 0.00 0.00 00:17:29.619 00:17:30.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.554 Nvme0n1 : 7.00 14952.29 58.41 0.00 0.00 0.00 0.00 0.00 00:17:30.554 =================================================================================================================== 00:17:30.554 Total : 14952.29 58.41 0.00 0.00 0.00 0.00 0.00 00:17:30.554 00:17:31.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.488 Nvme0n1 : 8.00 14964.50 58.46 0.00 0.00 0.00 0.00 0.00 00:17:31.488 =================================================================================================================== 00:17:31.488 Total : 14964.50 58.46 0.00 0.00 0.00 0.00 0.00 00:17:31.488 00:17:32.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.420 Nvme0n1 : 9.00 15044.56 58.77 0.00 0.00 0.00 0.00 0.00 00:17:32.420 =================================================================================================================== 00:17:32.420 Total : 15044.56 58.77 0.00 0.00 0.00 0.00 0.00 00:17:32.420 00:17:33.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.351 Nvme0n1 : 10.00 15064.70 58.85 0.00 0.00 0.00 0.00 0.00 00:17:33.351 =================================================================================================================== 00:17:33.351 Total : 15064.70 58.85 0.00 0.00 0.00 0.00 0.00 00:17:33.351 00:17:33.351 00:17:33.351 Latency(us) 00:17:33.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.351 Nvme0n1 : 10.00 15071.75 58.87 0.00 0.00 8488.00 2500.08 17864.63 00:17:33.351 =================================================================================================================== 00:17:33.351 Total : 15071.75 58.87 0.00 0.00 8488.00 2500.08 17864.63 00:17:33.351 0 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2391196 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2391196 ']' 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2391196 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.351 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2391196 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2391196' 00:17:33.608 killing process with pid 2391196 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2391196 00:17:33.608 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.608 00:17:33.608 Latency(us) 00:17:33.608 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:33.608 =================================================================================================================== 00:17:33.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2391196 00:17:33.608 03:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.225 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:34.225 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:34.225 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:34.481 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:34.482 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:34.482 03:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:34.739 [2024-07-21 03:27:20.027513] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:34.997 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:35.255 request: 00:17:35.255 { 00:17:35.255 "uuid": "863dabaa-4c51-4d0c-a689-9bdccca29280", 00:17:35.255 "method": "bdev_lvol_get_lvstores", 00:17:35.255 "req_id": 1 00:17:35.255 } 00:17:35.255 Got JSON-RPC error response 00:17:35.255 response: 00:17:35.255 { 00:17:35.255 "code": -19, 00:17:35.255 "message": "No such device" 00:17:35.255 } 00:17:35.255 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:35.255 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.255 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.255 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.255 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:35.513 aio_bdev 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aaaf7f66-6620-4f49-97c4-e0322709ee19 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=aaaf7f66-6620-4f49-97c4-e0322709ee19 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:35.513 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:35.770 03:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aaaf7f66-6620-4f49-97c4-e0322709ee19 -t 2000 00:17:36.028 [ 00:17:36.028 { 00:17:36.028 "name": "aaaf7f66-6620-4f49-97c4-e0322709ee19", 00:17:36.028 "aliases": [ 00:17:36.028 "lvs/lvol" 00:17:36.028 ], 00:17:36.028 "product_name": "Logical Volume", 00:17:36.028 "block_size": 4096, 00:17:36.028 "num_blocks": 38912, 00:17:36.028 "uuid": "aaaf7f66-6620-4f49-97c4-e0322709ee19", 00:17:36.028 "assigned_rate_limits": { 00:17:36.028 "rw_ios_per_sec": 0, 00:17:36.028 "rw_mbytes_per_sec": 0, 00:17:36.028 "r_mbytes_per_sec": 0, 00:17:36.028 "w_mbytes_per_sec": 0 00:17:36.028 }, 00:17:36.028 "claimed": false, 00:17:36.028 "zoned": false, 00:17:36.028 "supported_io_types": { 00:17:36.028 "read": true, 00:17:36.028 "write": true, 00:17:36.028 "unmap": true, 00:17:36.028 "write_zeroes": true, 00:17:36.028 "flush": false, 00:17:36.028 "reset": true, 00:17:36.028 "compare": false, 00:17:36.028 "compare_and_write": false, 00:17:36.028 "abort": false, 00:17:36.028 "nvme_admin": false, 00:17:36.028 "nvme_io": false 00:17:36.028 }, 00:17:36.028 "driver_specific": { 00:17:36.028 "lvol": { 00:17:36.028 "lvol_store_uuid": "863dabaa-4c51-4d0c-a689-9bdccca29280", 00:17:36.028 "base_bdev": "aio_bdev", 
00:17:36.028 "thin_provision": false, 00:17:36.028 "num_allocated_clusters": 38, 00:17:36.028 "snapshot": false, 00:17:36.028 "clone": false, 00:17:36.028 "esnap_clone": false 00:17:36.028 } 00:17:36.028 } 00:17:36.028 } 00:17:36.028 ] 00:17:36.028 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:36.028 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:36.028 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:36.286 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:36.286 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:36.286 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:36.543 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:36.543 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aaaf7f66-6620-4f49-97c4-e0322709ee19 00:17:36.801 03:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 863dabaa-4c51-4d0c-a689-9bdccca29280 00:17:37.059 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.317 00:17:37.317 real 0m17.699s 00:17:37.317 user 0m16.969s 00:17:37.317 sys 0m2.004s 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.317 ************************************ 00:17:37.317 END TEST lvs_grow_clean 00:17:37.317 ************************************ 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:37.317 ************************************ 00:17:37.317 START TEST lvs_grow_dirty 00:17:37.317 ************************************ 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.317 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.883 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:37.883 03:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:37.883 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:37.883 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:37.883 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:38.139 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:38.140 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:38.140 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 lvol 150 00:17:38.703 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:38.703 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:38.703 03:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:38.703 [2024-07-21 03:27:23.992185] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:38.703 [2024-07-21 03:27:23.992299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:38.703 true 00:17:38.703 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:38.703 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:39.268 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:39.268 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:39.525 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:39.783 03:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:40.040 [2024-07-21 03:27:25.119550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.040 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2393715 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2393715 /var/tmp/bdevperf.sock 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2393715 ']' 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.298 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:40.298 [2024-07-21 03:27:25.455784] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
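The cluster counts asserted in both the clean and dirty runs follow directly from the sizes in play. As a worked check against the values logged above, with 4 MiB clusters and, at these sizes, one cluster's worth consumed by lvstore metadata:

    200 MiB aio file / 4 MiB = 50 clusters - 1 (metadata)  -> total_data_clusters == 49
    400 MiB after truncate   = 100 clusters - 1            -> total_data_clusters == 99 once grown
    150 MiB lvol             = ceil(150 / 4) = 38 clusters -> "num_allocated_clusters": 38
    free clusters            = 99 - 38 = 61                -> the (( free_clusters == 61 )) check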
00:17:40.298 [2024-07-21 03:27:25.455864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393715 ] 00:17:40.298 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.298 [2024-07-21 03:27:25.522479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.556 [2024-07-21 03:27:25.614352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.556 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.556 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:40.556 03:27:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:41.121 Nvme0n1 00:17:41.121 03:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:41.378 [ 00:17:41.378 { 00:17:41.378 "name": "Nvme0n1", 00:17:41.378 "aliases": [ 00:17:41.378 "3ddf269c-bd2e-45a0-88e6-43b7d313d57c" 00:17:41.378 ], 00:17:41.378 "product_name": "NVMe disk", 00:17:41.378 "block_size": 4096, 00:17:41.378 "num_blocks": 38912, 00:17:41.378 "uuid": "3ddf269c-bd2e-45a0-88e6-43b7d313d57c", 00:17:41.378 "assigned_rate_limits": { 00:17:41.378 "rw_ios_per_sec": 0, 00:17:41.378 "rw_mbytes_per_sec": 0, 00:17:41.378 "r_mbytes_per_sec": 0, 00:17:41.378 "w_mbytes_per_sec": 0 00:17:41.378 }, 00:17:41.378 "claimed": false, 00:17:41.378 "zoned": false, 00:17:41.378 "supported_io_types": { 00:17:41.378 "read": true, 00:17:41.378 "write": true, 00:17:41.378 "unmap": true, 00:17:41.378 "write_zeroes": true, 00:17:41.378 "flush": true, 00:17:41.378 "reset": true, 00:17:41.378 "compare": true, 00:17:41.378 "compare_and_write": true, 00:17:41.378 "abort": true, 00:17:41.378 "nvme_admin": true, 00:17:41.378 "nvme_io": true 00:17:41.378 }, 00:17:41.378 "memory_domains": [ 00:17:41.378 { 00:17:41.378 "dma_device_id": "system", 00:17:41.378 "dma_device_type": 1 00:17:41.378 } 00:17:41.378 ], 00:17:41.378 "driver_specific": { 00:17:41.378 "nvme": [ 00:17:41.378 { 00:17:41.378 "trid": { 00:17:41.378 "trtype": "TCP", 00:17:41.378 "adrfam": "IPv4", 00:17:41.378 "traddr": "10.0.0.2", 00:17:41.378 "trsvcid": "4420", 00:17:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:41.378 }, 00:17:41.378 "ctrlr_data": { 00:17:41.378 "cntlid": 1, 00:17:41.378 "vendor_id": "0x8086", 00:17:41.378 "model_number": "SPDK bdev Controller", 00:17:41.378 "serial_number": "SPDK0", 00:17:41.378 "firmware_revision": "24.05.1", 00:17:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:41.378 "oacs": { 00:17:41.378 "security": 0, 00:17:41.378 "format": 0, 00:17:41.378 "firmware": 0, 00:17:41.378 "ns_manage": 0 00:17:41.378 }, 00:17:41.378 "multi_ctrlr": true, 00:17:41.378 "ana_reporting": false 00:17:41.378 }, 00:17:41.378 "vs": { 00:17:41.378 "nvme_version": "1.3" 00:17:41.378 }, 00:17:41.378 "ns_data": { 00:17:41.378 "id": 1, 00:17:41.378 "can_share": true 00:17:41.378 } 00:17:41.378 } 00:17:41.378 ], 00:17:41.378 "mp_policy": "active_passive" 00:17:41.378 } 00:17:41.378 } 00:17:41.378 ] 00:17:41.378 03:27:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2393852 00:17:41.378 03:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:41.378 03:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:41.378 Running I/O for 10 seconds... 00:17:42.796 Latency(us) 00:17:42.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.796 Nvme0n1 : 1.00 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:17:42.796 =================================================================================================================== 00:17:42.796 Total : 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:17:42.796 00:17:43.381 03:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:43.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.381 Nvme0n1 : 2.00 14579.00 56.95 0.00 0.00 0.00 0.00 0.00 00:17:43.381 =================================================================================================================== 00:17:43.381 Total : 14579.00 56.95 0.00 0.00 0.00 0.00 0.00 00:17:43.381 00:17:43.638 true 00:17:43.638 03:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:43.638 03:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:43.896 03:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:43.896 03:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:43.896 03:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2393852 00:17:44.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.461 Nvme0n1 : 3.00 14655.33 57.25 0.00 0.00 0.00 0.00 0.00 00:17:44.461 =================================================================================================================== 00:17:44.461 Total : 14655.33 57.25 0.00 0.00 0.00 0.00 0.00 00:17:44.461 00:17:45.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.396 Nvme0n1 : 4.00 14707.00 57.45 0.00 0.00 0.00 0.00 0.00 00:17:45.396 =================================================================================================================== 00:17:45.396 Total : 14707.00 57.45 0.00 0.00 0.00 0.00 0.00 00:17:45.396 00:17:46.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.771 Nvme0n1 : 5.00 14762.80 57.67 0.00 0.00 0.00 0.00 0.00 00:17:46.771 =================================================================================================================== 00:17:46.771 Total : 14762.80 57.67 0.00 0.00 0.00 0.00 0.00 00:17:46.771 00:17:47.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.704 Nvme0n1 : 6.00 14811.17 57.86 0.00 0.00 0.00 0.00 0.00 00:17:47.704 
=================================================================================================================== 00:17:47.704 Total : 14811.17 57.86 0.00 0.00 0.00 0.00 0.00 00:17:47.704 00:17:48.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.635 Nvme0n1 : 7.00 14919.00 58.28 0.00 0.00 0.00 0.00 0.00 00:17:48.635 =================================================================================================================== 00:17:48.635 Total : 14919.00 58.28 0.00 0.00 0.00 0.00 0.00 00:17:48.635 00:17:49.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.566 Nvme0n1 : 8.00 15006.75 58.62 0.00 0.00 0.00 0.00 0.00 00:17:49.566 =================================================================================================================== 00:17:49.566 Total : 15006.75 58.62 0.00 0.00 0.00 0.00 0.00 00:17:49.566 00:17:50.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.497 Nvme0n1 : 9.00 15025.67 58.69 0.00 0.00 0.00 0.00 0.00 00:17:50.497 =================================================================================================================== 00:17:50.497 Total : 15025.67 58.69 0.00 0.00 0.00 0.00 0.00 00:17:50.497 00:17:51.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.428 Nvme0n1 : 10.00 15106.20 59.01 0.00 0.00 0.00 0.00 0.00 00:17:51.428 =================================================================================================================== 00:17:51.428 Total : 15106.20 59.01 0.00 0.00 0.00 0.00 0.00 00:17:51.428 00:17:51.428 00:17:51.428 Latency(us) 00:17:51.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.428 Nvme0n1 : 10.01 15102.99 59.00 0.00 0.00 8468.51 4708.88 15631.55 00:17:51.428 =================================================================================================================== 00:17:51.428 Total : 15102.99 59.00 0.00 0.00 8468.51 4708.88 15631.55 00:17:51.428 0 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2393715 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2393715 ']' 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2393715 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2393715 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2393715' 00:17:51.428 killing process with pid 2393715 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2393715 00:17:51.428 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.428 00:17:51.428 Latency(us) 00:17:51.428 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:51.428 =================================================================================================================== 00:17:51.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.428 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2393715 00:17:51.685 03:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:51.942 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2390599 00:17:52.507 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2390599 00:17:52.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2390599 Killed "${NVMF_APP[@]}" "$@" 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2395183 00:17:52.764 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2395183 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2395183 ']' 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
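The waitforlisten call traced just above is at heart a bounded retry loop against the target's RPC socket. A minimal sketch of the idea, not the harness's exact implementation, assuming rpc.py is on PATH:

    # poll the RPC socket until the freshly started nvmf_tgt answers
    nvmfpid=$!                          # pid recorded right after launch
    for ((i = 0; i < 100; i++)); do     # max_retries=100, as in the trace
        kill -0 "$nvmfpid" 2>/dev/null || exit 1            # bail out if the target died
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done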
00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:52.765 03:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:52.765 [2024-07-21 03:27:37.867193] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:52.765 [2024-07-21 03:27:37.867278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.765 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.765 [2024-07-21 03:27:37.931518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.765 [2024-07-21 03:27:38.017926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.765 [2024-07-21 03:27:38.017989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.765 [2024-07-21 03:27:38.018006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.765 [2024-07-21 03:27:38.018020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.765 [2024-07-21 03:27:38.018033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.765 [2024-07-21 03:27:38.018070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.021 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:53.278 [2024-07-21 03:27:38.374491] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:53.278 [2024-07-21 03:27:38.374635] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:53.278 [2024-07-21 03:27:38.374693] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:53.278 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:53.536 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3ddf269c-bd2e-45a0-88e6-43b7d313d57c -t 2000 00:17:53.794 [ 00:17:53.794 { 00:17:53.794 "name": "3ddf269c-bd2e-45a0-88e6-43b7d313d57c", 00:17:53.794 "aliases": [ 00:17:53.794 "lvs/lvol" 00:17:53.794 ], 00:17:53.794 "product_name": "Logical Volume", 00:17:53.794 "block_size": 4096, 00:17:53.794 "num_blocks": 38912, 00:17:53.794 "uuid": "3ddf269c-bd2e-45a0-88e6-43b7d313d57c", 00:17:53.794 "assigned_rate_limits": { 00:17:53.794 "rw_ios_per_sec": 0, 00:17:53.794 "rw_mbytes_per_sec": 0, 00:17:53.794 "r_mbytes_per_sec": 0, 00:17:53.794 "w_mbytes_per_sec": 0 00:17:53.794 }, 00:17:53.794 "claimed": false, 00:17:53.794 "zoned": false, 00:17:53.794 "supported_io_types": { 00:17:53.794 "read": true, 00:17:53.794 "write": true, 00:17:53.794 "unmap": true, 00:17:53.794 "write_zeroes": true, 00:17:53.794 "flush": false, 00:17:53.794 "reset": true, 00:17:53.794 "compare": false, 00:17:53.794 "compare_and_write": false, 00:17:53.794 "abort": false, 00:17:53.794 "nvme_admin": false, 00:17:53.794 "nvme_io": false 00:17:53.794 }, 00:17:53.794 "driver_specific": { 00:17:53.794 "lvol": { 00:17:53.794 "lvol_store_uuid": "0efdc8e5-0e00-420f-b98d-87aa6f761659", 00:17:53.794 "base_bdev": "aio_bdev", 00:17:53.794 "thin_provision": false, 00:17:53.794 "num_allocated_clusters": 38, 00:17:53.794 "snapshot": false, 00:17:53.794 "clone": false, 00:17:53.794 "esnap_clone": false 00:17:53.794 } 00:17:53.794 } 00:17:53.794 } 00:17:53.794 ] 00:17:53.794 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:53.794 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:53.794 03:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:54.051 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:54.051 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:54.051 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:54.309 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:54.309 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:54.567 [2024-07-21 03:27:39.723817] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:54.567 03:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:54.825 request: 00:17:54.825 { 00:17:54.825 "uuid": "0efdc8e5-0e00-420f-b98d-87aa6f761659", 00:17:54.825 "method": "bdev_lvol_get_lvstores", 00:17:54.825 "req_id": 1 00:17:54.825 } 00:17:54.825 Got JSON-RPC error response 00:17:54.825 response: 00:17:54.825 { 00:17:54.825 "code": -19, 00:17:54.825 "message": "No such device" 00:17:54.825 } 00:17:54.825 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:54.825 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:54.825 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:54.825 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:54.825 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:55.082 aio_bdev 00:17:55.082 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
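The dirty-mode teardown traced here exercises the lvstore's dependence on its base bdev: deleting aio_bdev closes the lvstore, the get_lvstores RPC must then fail with -19 (No such device), and re-creating the aio bdev on the same backing file lets bdev examine (and, after the earlier kill -9, blobstore recovery) bring the lvol back. The pattern, condensed, with $lvs, $lvol, and $aio_file standing in for the UUIDs and workspace path from this run:

    rpc.py bdev_aio_delete aio_bdev                    # closes lvstore "lvs" with it
    rpc.py bdev_lvol_get_lvstores -u "$lvs" && exit 1  # expected to fail: No such device (-19)
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096   # same backing file as before
    rpc.py bdev_wait_for_examine                       # examine re-opens the lvstore
    rpc.py bdev_get_bdevs -b "$lvol" -t 2000           # the lvol is visible again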
00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:55.083 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:55.340 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3ddf269c-bd2e-45a0-88e6-43b7d313d57c -t 2000 00:17:55.597 [ 00:17:55.597 { 00:17:55.597 "name": "3ddf269c-bd2e-45a0-88e6-43b7d313d57c", 00:17:55.597 "aliases": [ 00:17:55.597 "lvs/lvol" 00:17:55.597 ], 00:17:55.597 "product_name": "Logical Volume", 00:17:55.597 "block_size": 4096, 00:17:55.597 "num_blocks": 38912, 00:17:55.597 "uuid": "3ddf269c-bd2e-45a0-88e6-43b7d313d57c", 00:17:55.597 "assigned_rate_limits": { 00:17:55.597 "rw_ios_per_sec": 0, 00:17:55.597 "rw_mbytes_per_sec": 0, 00:17:55.597 "r_mbytes_per_sec": 0, 00:17:55.597 "w_mbytes_per_sec": 0 00:17:55.597 }, 00:17:55.597 "claimed": false, 00:17:55.597 "zoned": false, 00:17:55.597 "supported_io_types": { 00:17:55.597 "read": true, 00:17:55.597 "write": true, 00:17:55.597 "unmap": true, 00:17:55.597 "write_zeroes": true, 00:17:55.597 "flush": false, 00:17:55.597 "reset": true, 00:17:55.597 "compare": false, 00:17:55.597 "compare_and_write": false, 00:17:55.597 "abort": false, 00:17:55.597 "nvme_admin": false, 00:17:55.597 "nvme_io": false 00:17:55.597 }, 00:17:55.597 "driver_specific": { 00:17:55.597 "lvol": { 00:17:55.597 "lvol_store_uuid": "0efdc8e5-0e00-420f-b98d-87aa6f761659", 00:17:55.597 "base_bdev": "aio_bdev", 00:17:55.597 "thin_provision": false, 00:17:55.597 "num_allocated_clusters": 38, 00:17:55.597 "snapshot": false, 00:17:55.597 "clone": false, 00:17:55.597 "esnap_clone": false 00:17:55.597 } 00:17:55.598 } 00:17:55.598 } 00:17:55.598 ] 00:17:55.598 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:55.598 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:55.598 03:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:55.855 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:55.855 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:55.855 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:56.112 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:56.112 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ddf269c-bd2e-45a0-88e6-43b7d313d57c 00:17:56.369 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0efdc8e5-0e00-420f-b98d-87aa6f761659 00:17:56.626 03:27:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:56.882 00:17:56.882 real 0m19.492s 00:17:56.882 user 0m49.141s 00:17:56.882 sys 0m4.820s 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:56.882 ************************************ 00:17:56.882 END TEST lvs_grow_dirty 00:17:56.882 ************************************ 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:56.882 nvmf_trace.0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.882 rmmod nvme_tcp 00:17:56.882 rmmod nvme_fabrics 00:17:56.882 rmmod nvme_keyring 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2395183 ']' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2395183 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2395183 ']' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2395183 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.882 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2395183 00:17:57.139 03:27:42 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2395183' 00:17:57.139 killing process with pid 2395183 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2395183 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2395183 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.139 03:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.666 03:27:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.666 00:17:59.666 real 0m42.463s 00:17:59.666 user 1m11.847s 00:17:59.666 sys 0m8.646s 00:17:59.666 03:27:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.666 03:27:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:59.666 ************************************ 00:17:59.666 END TEST nvmf_lvs_grow 00:17:59.666 ************************************ 00:17:59.666 03:27:44 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:59.666 03:27:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:59.666 03:27:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.666 03:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.666 ************************************ 00:17:59.666 START TEST nvmf_bdev_io_wait 00:17:59.666 ************************************ 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:59.666 * Looking for test storage... 
00:17:59.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.666 03:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.600 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:18:01.601 00:18:01.601 --- 10.0.0.2 ping statistics --- 00:18:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.601 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:01.601 00:18:01.601 --- 10.0.0.1 ping statistics --- 00:18:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.601 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2397692 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2397692 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 2397692 ']' 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.601 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.601 [2024-07-21 03:27:46.802760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:01.601 [2024-07-21 03:27:46.802837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.601 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.601 [2024-07-21 03:27:46.872718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.859 [2024-07-21 03:27:46.965414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.859 [2024-07-21 03:27:46.965474] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.859 [2024-07-21 03:27:46.965490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.859 [2024-07-21 03:27:46.965504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.859 [2024-07-21 03:27:46.965516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.859 [2024-07-21 03:27:46.965610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.859 [2024-07-21 03:27:46.965667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.859 [2024-07-21 03:27:46.965784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.859 [2024-07-21 03:27:46.965786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.859 03:27:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.859 [2024-07-21 03:27:47.102762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.859 03:27:47 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:01.859 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.860 Malloc0 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.860 [2024-07-21 03:27:47.163000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2397731 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.860 { 00:18:01.860 "params": { 00:18:01.860 "name": "Nvme$subsystem", 00:18:01.860 "trtype": "$TEST_TRANSPORT", 00:18:01.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.860 "adrfam": "ipv4", 00:18:01.860 "trsvcid": "$NVMF_PORT", 00:18:01.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.860 "hdgst": ${hdgst:-false}, 00:18:01.860 "ddgst": ${ddgst:-false} 00:18:01.860 }, 00:18:01.860 "method": "bdev_nvme_attach_controller" 00:18:01.860 } 00:18:01.860 EOF 00:18:01.860 )") 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2397733 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2397736 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.860 { 00:18:01.860 "params": { 00:18:01.860 "name": "Nvme$subsystem", 00:18:01.860 "trtype": "$TEST_TRANSPORT", 00:18:01.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.860 "adrfam": "ipv4", 00:18:01.860 "trsvcid": "$NVMF_PORT", 00:18:01.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.860 "hdgst": ${hdgst:-false}, 00:18:01.860 "ddgst": ${ddgst:-false} 00:18:01.860 }, 00:18:01.860 "method": "bdev_nvme_attach_controller" 00:18:01.860 } 00:18:01.860 EOF 00:18:01.860 )") 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.860 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:02.118 { 00:18:02.118 "params": { 00:18:02.118 "name": "Nvme$subsystem", 00:18:02.118 "trtype": "$TEST_TRANSPORT", 00:18:02.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.118 "adrfam": "ipv4", 00:18:02.118 "trsvcid": "$NVMF_PORT", 00:18:02.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.118 "hdgst": ${hdgst:-false}, 00:18:02.118 "ddgst": ${ddgst:-false} 00:18:02.118 }, 00:18:02.118 "method": "bdev_nvme_attach_controller" 00:18:02.118 } 00:18:02.118 EOF 00:18:02.118 )") 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2397740 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:02.118 { 00:18:02.118 "params": { 00:18:02.118 "name": "Nvme$subsystem", 00:18:02.118 "trtype": "$TEST_TRANSPORT", 00:18:02.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.118 "adrfam": "ipv4", 00:18:02.118 "trsvcid": "$NVMF_PORT", 00:18:02.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.118 "hdgst": ${hdgst:-false}, 00:18:02.118 "ddgst": ${ddgst:-false} 00:18:02.118 }, 00:18:02.118 "method": "bdev_nvme_attach_controller" 00:18:02.118 } 00:18:02.118 EOF 00:18:02.118 )") 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2397731 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:02.118 "params": { 00:18:02.118 "name": "Nvme1", 00:18:02.118 "trtype": "tcp", 00:18:02.118 "traddr": "10.0.0.2", 00:18:02.118 "adrfam": "ipv4", 00:18:02.118 "trsvcid": "4420", 00:18:02.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.118 "hdgst": false, 00:18:02.118 "ddgst": false 00:18:02.118 }, 00:18:02.118 "method": "bdev_nvme_attach_controller" 00:18:02.118 }' 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:02.118 "params": { 00:18:02.118 "name": "Nvme1", 00:18:02.118 "trtype": "tcp", 00:18:02.118 "traddr": "10.0.0.2", 00:18:02.118 "adrfam": "ipv4", 00:18:02.118 "trsvcid": "4420", 00:18:02.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.118 "hdgst": false, 00:18:02.118 "ddgst": false 00:18:02.118 }, 00:18:02.118 "method": "bdev_nvme_attach_controller" 00:18:02.118 }' 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:02.118 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:02.118 "params": { 00:18:02.119 "name": "Nvme1", 00:18:02.119 "trtype": "tcp", 00:18:02.119 "traddr": "10.0.0.2", 00:18:02.119 "adrfam": "ipv4", 00:18:02.119 "trsvcid": "4420", 00:18:02.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.119 "hdgst": false, 00:18:02.119 "ddgst": false 00:18:02.119 }, 00:18:02.119 "method": "bdev_nvme_attach_controller" 00:18:02.119 }' 00:18:02.119 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:02.119 03:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:02.119 "params": { 00:18:02.119 "name": "Nvme1", 00:18:02.119 "trtype": "tcp", 00:18:02.119 "traddr": "10.0.0.2", 00:18:02.119 "adrfam": "ipv4", 00:18:02.119 "trsvcid": "4420", 00:18:02.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.119 "hdgst": false, 00:18:02.119 "ddgst": false 00:18:02.119 }, 00:18:02.119 "method": "bdev_nvme_attach_controller" 00:18:02.119 }' 00:18:02.119 [2024-07-21 03:27:47.209388] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:02.119 [2024-07-21 03:27:47.209479] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:02.119 [2024-07-21 03:27:47.209542] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:02.119 [2024-07-21 03:27:47.209626] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:02.119 [2024-07-21 03:27:47.209645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:02.119 [2024-07-21 03:27:47.209709] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:02.119 [2024-07-21 03:27:47.209817] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:02.119 [2024-07-21 03:27:47.209879] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:02.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.119 [2024-07-21 03:27:47.382864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.377 [2024-07-21 03:27:47.449383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.377 [2024-07-21 03:27:47.453034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:02.377 [2024-07-21 03:27:47.517428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:02.377 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.377 [2024-07-21 03:27:47.547234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.377 [2024-07-21 03:27:47.625001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:02.377 [2024-07-21 03:27:47.652501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.635 [2024-07-21 03:27:47.729338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:02.635 Running I/O for 1 seconds... 00:18:02.635 Running I/O for 1 seconds... 00:18:02.635 Running I/O for 1 seconds... 00:18:02.894 Running I/O for 1 seconds... 00:18:03.829 00:18:03.829 Latency(us) 00:18:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.829 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:03.829 Nvme1n1 : 1.02 6730.96 26.29 0.00 0.00 18849.28 8835.22 29127.11 00:18:03.829 =================================================================================================================== 00:18:03.829 Total : 6730.96 26.29 0.00 0.00 18849.28 8835.22 29127.11 00:18:03.829 00:18:03.829 Latency(us) 00:18:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.829 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:03.829 Nvme1n1 : 1.00 191866.60 749.48 0.00 0.00 664.55 274.58 879.88 00:18:03.829 =================================================================================================================== 00:18:03.829 Total : 191866.60 749.48 0.00 0.00 664.55 274.58 879.88 00:18:03.829 00:18:03.829 Latency(us) 00:18:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.829 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:03.829 Nvme1n1 : 1.01 6547.35 25.58 0.00 0.00 19487.54 5606.97 37865.24 00:18:03.829 =================================================================================================================== 00:18:03.829 Total : 6547.35 25.58 0.00 0.00 19487.54 5606.97 37865.24 00:18:03.829 00:18:03.829 Latency(us) 00:18:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.829 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:03.829 Nvme1n1 : 1.01 9397.93 36.71 0.00 0.00 13558.60 7475.96 26408.58 00:18:03.829 =================================================================================================================== 00:18:03.829 Total : 9397.93 36.71 0.00 0.00 13558.60 7475.96 26408.58 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
2397733 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2397736 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2397740 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.087 rmmod nvme_tcp 00:18:04.087 rmmod nvme_fabrics 00:18:04.087 rmmod nvme_keyring 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2397692 ']' 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2397692 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 2397692 ']' 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 2397692 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2397692 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2397692' 00:18:04.087 killing process with pid 2397692 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 2397692 00:18:04.087 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 2397692 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.346 03:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.874 03:27:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:06.874 00:18:06.874 real 0m7.074s 00:18:06.874 user 0m15.739s 00:18:06.874 sys 0m3.602s 00:18:06.874 03:27:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:06.874 03:27:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:06.874 ************************************ 00:18:06.874 END TEST nvmf_bdev_io_wait 00:18:06.874 ************************************ 00:18:06.874 03:27:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:06.874 03:27:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:06.874 03:27:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:06.874 03:27:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.874 ************************************ 00:18:06.874 START TEST nvmf_queue_depth 00:18:06.874 ************************************ 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:06.874 * Looking for test storage... 00:18:06.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.874 03:27:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.874 03:27:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:06.874 03:27:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:08.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:08.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:08.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:08.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:08.773 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:08.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:08.774 00:18:08.774 --- 10.0.0.2 ping statistics --- 00:18:08.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.774 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:08.774 00:18:08.774 --- 10.0.0.1 ping statistics --- 00:18:08.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.774 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2399947 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
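The block above is the harness's single-host network bring-up: the two ports of the E810 NIC become an initiator/target pair, with cvl_0_1 kept in the default namespace as the initiator (10.0.0.1/24) and cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2/24), so traffic between the two addresses traverses the NIC rather than local loopback (the point of the phy flavor of this job). The iptables rule admits NVMe/TCP traffic on port 4420, the bidirectional pings verify the path, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same bring-up, using the interface names from this log (they are host-specific, so treat them as placeholders):

    # Sketch of nvmf_tcp_init as traced above; substitute your own port names.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port gets its own stack
    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator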
00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2399947 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2399947 ']' 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:08.774 03:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.774 [2024-07-21 03:27:53.939498] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:08.774 [2024-07-21 03:27:53.939571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.774 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.774 [2024-07-21 03:27:54.005716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.030 [2024-07-21 03:27:54.093809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.030 [2024-07-21 03:27:54.093856] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.030 [2024-07-21 03:27:54.093885] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.030 [2024-07-21 03:27:54.093907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.030 [2024-07-21 03:27:54.093919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
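At this point nvmfappstart has launched the target inside the namespace with -i 0 (shared-memory id 0, which is why the trace notice points at /dev/shm/nvmf_trace.0), -e 0xFFFF (all tracepoint groups, echoed back by the Tracepoint Group Mask notice) and -m 0x2 (core mask selecting core 1, matching the reactor notice just below), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal, hypothetical stand-in for that wait, assuming only the in-tree rpc.py; the real helper in autotest_common.sh also verifies the pid is still alive while it polls:

    # Hedged reduction of waitforlisten: retry an innocuous RPC until it succeeds.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

The RPC socket is a Unix-domain socket addressed by filesystem path, so it stays reachable from the default namespace even though the target itself runs inside cvl_0_0_ns_spdk.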
00:18:09.030 [2024-07-21 03:27:54.093963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 [2024-07-21 03:27:54.238416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 Malloc0 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 [2024-07-21 03:27:54.301389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2399972 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:09.030 03:27:54 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2399972 /var/tmp/bdevperf.sock 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2399972 ']' 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:09.030 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.287 [2024-07-21 03:27:54.348767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:09.287 [2024-07-21 03:27:54.348847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399972 ] 00:18:09.287 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.287 [2024-07-21 03:27:54.412290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.287 [2024-07-21 03:27:54.504902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:09.545 NVMe0n1 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.545 03:27:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:09.802 Running I/O for 10 seconds... 
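The scenario is now fully assembled: the target carries a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and bdevperf has attached that subsystem as NVMe0 and is driving a verify workload for 10 seconds at queue depth 1024 with 4 KiB I/Os (-q 1024 -o 4096 -w verify -t 10). Replayed as explicit commands, a sketch assembled from the rpc_cmd lines above (rpc_cmd is the harness's wrapper around the same rpc.py):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py

    # Target side, over the default /var/tmp/spdk.sock:
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: -z makes bdevperf idle until perform_tests arrives on its socket
    # (the harness waits for /var/tmp/bdevperf.sock before issuing the next two calls).
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests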
00:18:19.801 00:18:19.801 Latency(us) 00:18:19.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:19.801 Verification LBA range: start 0x0 length 0x4000 00:18:19.801 NVMe0n1 : 10.10 8603.78 33.61 0.00 0.00 118522.97 24369.68 71070.15 00:18:19.801 =================================================================================================================== 00:18:19.801 Total : 8603.78 33.61 0.00 0.00 118522.97 24369.68 71070.15 00:18:19.801 0 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2399972 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2399972 ']' 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2399972 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:19.801 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2399972 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2399972' 00:18:20.059 killing process with pid 2399972 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2399972 00:18:20.059 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.059 00:18:20.059 Latency(us) 00:18:20.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.059 =================================================================================================================== 00:18:20.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2399972 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.059 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.059 rmmod nvme_tcp 00:18:20.059 rmmod nvme_fabrics 00:18:20.059 rmmod nvme_keyring 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2399947 ']' 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2399947 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 
2399947 ']' 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2399947 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2399947 00:18:20.317 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:20.318 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:20.318 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2399947' 00:18:20.318 killing process with pid 2399947 00:18:20.318 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2399947 00:18:20.318 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2399947 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.575 03:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.474 03:28:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.474 00:18:22.474 real 0m16.044s 00:18:22.474 user 0m22.616s 00:18:22.474 sys 0m3.068s 00:18:22.474 03:28:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:22.474 03:28:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:22.474 ************************************ 00:18:22.474 END TEST nvmf_queue_depth 00:18:22.474 ************************************ 00:18:22.474 03:28:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:22.474 03:28:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:22.474 03:28:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:22.474 03:28:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.474 ************************************ 00:18:22.474 START TEST nvmf_target_multipath 00:18:22.474 ************************************ 00:18:22.474 03:28:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:22.732 * Looking for test storage... 
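One orientation note as the next scenario spins up: every test in this job follows the same frame. nvmf.sh invokes run_test <name> <script> --transport=tcp, which prints the START banner, executes the script under bash's timer, emits the real/user/sys figures (16.044 s wall for the queue-depth run that just ended), and closes with the END banner. A hedged sketch of that wrapper pattern in plain bash; the real run_test in autotest_common.sh additionally manages xtrace and error trapping:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # reported as the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_target_multipath \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp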
00:18:22.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.732 
03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.732 03:28:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:24.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:24.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.632 
03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:24.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:24.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.632 
03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:18:24.632 00:18:24.632 --- 10.0.0.2 ping statistics --- 00:18:24.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.632 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:18:24.632 00:18:24.632 --- 10.0.0.1 ping statistics --- 00:18:24.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.632 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.632 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:24.890 only one NIC for nvmf test 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.890 rmmod nvme_tcp 00:18:24.890 rmmod nvme_fabrics 00:18:24.890 rmmod nvme_keyring 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.890 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:24.891 03:28:09 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:24.891 03:28:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.891 03:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.830 00:18:26.830 real 0m4.306s 00:18:26.830 user 0m0.814s 00:18:26.830 sys 0m1.490s 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:26.830 03:28:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:26.830 
************************************ 00:18:26.830 END TEST nvmf_target_multipath 00:18:26.830 ************************************ 00:18:26.830 03:28:12 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:26.830 03:28:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:26.830 03:28:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:26.830 03:28:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.830 ************************************ 00:18:26.830 START TEST nvmf_zcopy 00:18:26.830 ************************************ 00:18:26.830 03:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:27.089 * Looking for test storage... 00:18:27.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.089 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
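[editor's note] paths/export.sh is sourced once per nested script, and each pass prepends the Go, protoc, and golangci tool directories again, which is why the PATH values echoed above carry several copies of each entry. A hypothetical helper (not part of the SPDK tree) that collapses such duplicates while preserving first-seen order:

    dedupe_path() {
        # split PATH on ':', keep the first occurrence of each entry, re-join
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
        export PATH
    }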
00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.090 03:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.985 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.985 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
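[editor's note] gather_supported_nvmf_pci_devs, traced above, builds lists of known Intel E810/X722 and Mellanox device IDs and intersects them with the machine's PCI bus, then resolves each hit to its kernel net device via sysfs. A rough stand-alone equivalent, assuming lspci's numeric output format and showing only the subset of IDs visible in this trace:

    intel=8086
    supported="1592 159b 37d2"                  # E810 (0x1592/0x159b), X722 (0x37d2)
    for bdf in $(lspci -d "${intel}:" -n | awk '{print $1}'); do
        dev=$(lspci -n -s "$bdf" | awk '{print $3}' | cut -d: -f2)
        if [[ " $supported " == *" $dev "* ]]; then
            # a matching NIC's net devices live under its sysfs PCI node
            ls "/sys/bus/pci/devices/0000:${bdf}/net/" 2>/dev/null
        fi
    done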
00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.986 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:29.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:18:29.244 00:18:29.244 --- 10.0.0.2 ping statistics --- 00:18:29.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.244 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:29.244 00:18:29.244 --- 10.0.0.1 ping statistics --- 00:18:29.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.244 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2405153 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2405153 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 2405153 ']' 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.244 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.244 [2024-07-21 03:28:14.387984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:29.244 [2024-07-21 03:28:14.388082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.244 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.244 [2024-07-21 03:28:14.458291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.244 [2024-07-21 03:28:14.552423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.244 [2024-07-21 03:28:14.552486] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:29.244 [2024-07-21 03:28:14.552518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.244 [2024-07-21 03:28:14.552533] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.244 [2024-07-21 03:28:14.552545] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.244 [2024-07-21 03:28:14.552585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 [2024-07-21 03:28:14.698278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 [2024-07-21 03:28:14.714458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 malloc0 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 
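[editor's note] The rpc_cmd calls traced above drive the target over /var/tmp/spdk.sock; in the SPDK tree they resolve to scripts/rpc.py invocations. The equivalent direct sequence, with every argument taken verbatim from the trace (only the $rpc shorthand is added for this sketch):

    rpc="scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # zero-copy TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MiB malloc-backed bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1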
03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.502 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.502 { 00:18:29.502 "params": { 00:18:29.502 "name": "Nvme$subsystem", 00:18:29.502 "trtype": "$TEST_TRANSPORT", 00:18:29.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.502 "adrfam": "ipv4", 00:18:29.502 "trsvcid": "$NVMF_PORT", 00:18:29.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.502 "hdgst": ${hdgst:-false}, 00:18:29.502 "ddgst": ${ddgst:-false} 00:18:29.502 }, 00:18:29.502 "method": "bdev_nvme_attach_controller" 00:18:29.502 } 00:18:29.503 EOF 00:18:29.503 )") 00:18:29.503 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:29.503 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:29.503 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:29.503 03:28:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.503 "params": { 00:18:29.503 "name": "Nvme1", 00:18:29.503 "trtype": "tcp", 00:18:29.503 "traddr": "10.0.0.2", 00:18:29.503 "adrfam": "ipv4", 00:18:29.503 "trsvcid": "4420", 00:18:29.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.503 "hdgst": false, 00:18:29.503 "ddgst": false 00:18:29.503 }, 00:18:29.503 "method": "bdev_nvme_attach_controller" 00:18:29.503 }' 00:18:29.503 [2024-07-21 03:28:14.795851] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:29.503 [2024-07-21 03:28:14.795954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405178 ] 00:18:29.760 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.760 [2024-07-21 03:28:14.859812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.760 [2024-07-21 03:28:14.947657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.018 Running I/O for 10 seconds... 
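[editor's note] bdevperf takes its configuration from the JSON document handed in on --json /dev/fd/62. The params object printed above is, per gen_nvmf_target_json, wrapped into a bdev-subsystem config roughly shaped as below; the outer "subsystems" wrapper is reconstructed from nvmf/common.sh conventions and is not shown verbatim in the trace:

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }]
      }]
    }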
00:18:39.982 00:18:39.982 Latency(us) 00:18:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.982 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:39.982 Verification LBA range: start 0x0 length 0x1000 00:18:39.982 Nvme1n1 : 10.01 5827.26 45.53 0.00 0.00 21905.25 3155.44 31457.28 00:18:39.982 =================================================================================================================== 00:18:39.982 Total : 5827.26 45.53 0.00 0.00 21905.25 3155.44 31457.28 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2406487 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:40.242 { 00:18:40.242 "params": { 00:18:40.242 "name": "Nvme$subsystem", 00:18:40.242 "trtype": "$TEST_TRANSPORT", 00:18:40.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:40.242 "adrfam": "ipv4", 00:18:40.242 "trsvcid": "$NVMF_PORT", 00:18:40.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:40.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:40.242 "hdgst": ${hdgst:-false}, 00:18:40.242 "ddgst": ${ddgst:-false} 00:18:40.242 }, 00:18:40.242 "method": "bdev_nvme_attach_controller" 00:18:40.242 } 00:18:40.242 EOF 00:18:40.242 )") 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:40.242 [2024-07-21 03:28:25.393315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.393366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:40.242 03:28:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:40.242 "params": { 00:18:40.242 "name": "Nvme1", 00:18:40.242 "trtype": "tcp", 00:18:40.242 "traddr": "10.0.0.2", 00:18:40.242 "adrfam": "ipv4", 00:18:40.242 "trsvcid": "4420", 00:18:40.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.242 "hdgst": false, 00:18:40.242 "ddgst": false 00:18:40.242 }, 00:18:40.242 "method": "bdev_nvme_attach_controller" 00:18:40.242 }' 00:18:40.242 [2024-07-21 03:28:25.401269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.401297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.409282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.409306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.417290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.417311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.425309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.425330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.430226] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:40.242 [2024-07-21 03:28:25.430291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406487 ] 00:18:40.242 [2024-07-21 03:28:25.433333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.433353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.441360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.441383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.449378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.449397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.457399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.457419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.242 [2024-07-21 03:28:25.465437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.465462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.473459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.473484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.481480] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.481505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.489503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.489529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.495999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.242 [2024-07-21 03:28:25.497523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.497548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.505581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.505630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.513597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.513639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.521595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.521629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.529624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.529663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.537657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.537679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.545678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.545701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.242 [2024-07-21 03:28:25.553766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.242 [2024-07-21 03:28:25.553822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.561723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.561752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.569734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.569757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.577762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.577785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.585767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.585789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.592376] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:40.501 [2024-07-21 03:28:25.593788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.593810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.601816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.601838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.609860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.609913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.617888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.617943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.625927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.625980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.633953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.633993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.641992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.642034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.650008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.650049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.658035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.658080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.666006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.666032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.674071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.674112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.682094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.682134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.690103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.690138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.698106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.698131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.706136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:40.501 [2024-07-21 03:28:25.706166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.714158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.714195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.722178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.722206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.730203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.730230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.738228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.738256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.746249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.746276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.754270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.754296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.762293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.762318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.770313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.770338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.778337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.778362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.786359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.786385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.794384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.794411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.802408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.802434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.501 [2024-07-21 03:28:25.810434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.501 [2024-07-21 03:28:25.810464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.818461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.818491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.826476] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.826503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.834504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.834532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.842520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.842546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.850542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.850567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.858563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.858588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.759 [2024-07-21 03:28:25.866583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.759 [2024-07-21 03:28:25.866608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.874609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.874645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.882639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.882665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.890681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.890707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.898691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.898719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 Running I/O for 5 seconds... 
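[editor's note] From here the trace interleaves the 5-second randrw bdevperf run with a stream of paired errors: each attempt to re-add NSID 1 while it is still attached (or paused, per nvmf_rpc_ns_paused) is rejected, and the test simply retries. A sketch of the kind of churn loop that produces this pattern; the loop shape is an assumption for illustration, not lifted from zcopy.sh, and $perfpid corresponds to the perfpid=2406487 recorded above:

    rpc="scripts/rpc.py"
    while kill -0 "$perfpid" 2>/dev/null; do    # churn for as long as bdevperf runs
        # each attempt fails with 'Requested NSID 1 already in use' while attached
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done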
00:18:40.760 [2024-07-21 03:28:25.910809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.910839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.921362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.921391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.935056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.935084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.947483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.947512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.959957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.960000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.972698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.972726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.985035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.985067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:25.997294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:25.997322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.009733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.009775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.022247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.022279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.034076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.034118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.046504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.046533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.057936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.057964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.760 [2024-07-21 03:28:26.068782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.760 [2024-07-21 03:28:26.068811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.080321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 
[2024-07-21 03:28:26.080351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.092030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.092059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.103622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.103650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.115360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.115389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.126525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.126553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.138112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.138140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.018 [2024-07-21 03:28:26.149379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.018 [2024-07-21 03:28:26.149407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.160953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.160982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.172896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.172926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.184408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.184436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.197935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.197964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.208461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.208489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.220513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.220542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.232137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.232165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.243782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.243813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.019 [2024-07-21 03:28:26.255426] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.019 [2024-07-21 03:28:26.255455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two-message error pair — subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats back-to-back roughly every 11-14 ms from [2024-07-21 03:28:26.266681] through [2024-07-21 03:28:29.921365] (elapsed 00:18:41.019 to 00:18:44.638); several hundred identical repetitions elided ...]
00:18:44.638 [2024-07-21 03:28:29.933870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.638 [2024-07-21 03:28:29.933902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.638 [2024-07-21 03:28:29.946643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.638 [2024-07-21 03:28:29.946676] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:29.959047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:29.959075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:29.971432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:29.971459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:29.983444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:29.983471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:29.994884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:29.994925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.007569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.007623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.020534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.020566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.033692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.033736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.046634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.046674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.058976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.059003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.070681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.070708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.082724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.082751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.094288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.094315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.105990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.106031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.116491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.116518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.128353] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.128381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.142212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.142240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.153051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.153078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.164579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.164606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.176373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.176400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.188063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.188099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.896 [2024-07-21 03:28:30.200188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.896 [2024-07-21 03:28:30.200215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.214275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.214302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.225552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.225579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.237511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.237537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.248983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.249010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.261035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.261062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.272781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.153 [2024-07-21 03:28:30.272809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.153 [2024-07-21 03:28:30.284993] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.285020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.296833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.296860] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.308645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.308672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.320985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.321013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.332268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.332295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.344262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.344289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.356330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.356357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.369464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.369491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.380608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.380646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.392970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.392997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.405152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.405179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.416872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.416906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.428377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.428404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.439721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.439749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.451174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.451200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.154 [2024-07-21 03:28:30.462324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.154 [2024-07-21 03:28:30.462351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.473834] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.473862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.487392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.487420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.498558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.498584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.510195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.510222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.521785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.521813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.533393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.533419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.546114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.546141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.558344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.558374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.570851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.570878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.583718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.583746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.605677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.605707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.618080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.618124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.629959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.630002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.642027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.642054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.654391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.654431] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.666851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.666893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.679237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.679267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.692054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.692085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.704719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.704747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.412 [2024-07-21 03:28:30.717514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.412 [2024-07-21 03:28:30.717542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.731016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.731049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.743446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.743478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.756622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.756650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.769710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.769738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.782453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.782480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.794977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.795005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.807446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.807473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.819447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.819473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.831527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.670 [2024-07-21 03:28:30.831559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.670 [2024-07-21 03:28:30.843816] 
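
The flood of paired errors above is the path this test exercises on purpose: the test loops nvmf_subsystem_add_ns for NSID 1, which is already attached, so each attempt runs the subsystem-paused callback (hence nvmf_rpc_ns_paused), is rejected in subsystem.c, and is logged by the RPC layer. A minimal sketch of the same failure using scripts/rpc.py (the helper behind the rpc_cmd calls in this trace), assuming the default /var/tmp/spdk.sock and an existing nqn.2016-06.io.spdk:cnode1; the second malloc bdev is purely illustrative:

  # NSID 1 is claimed by the first add...
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # ...so any further add with an explicit -n 1 fails with
  # "Requested NSID 1 already in use" / "Unable to add namespace"
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
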
00:18:45.670
00:18:45.670 Latency(us)
00:18:45.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:45.670 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:45.670 Nvme1n1 : 5.01 10524.14 82.22 0.00 0.00 12145.46 5170.06 19806.44
00:18:45.670 ===================================================================================================================
00:18:45.670 Total : 10524.14 82.22 0.00 0.00 12145.46 5170.06 19806.44
00:18:45.671 [2024-07-21 03:28:30.925369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:45.671 [2024-07-21 03:28:30.925399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues at roughly 8 ms intervals through 03:28:31.141999 ...]
00:18:45.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2406487) - No such process
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2406487
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:45.928 delay0
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.928 03:28:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:45.928 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.185 [2024-07-21 03:28:31.263564] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:52.816 Initializing NVMe Controllers 00:18:52.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:52.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:52.816 Initialization complete. Launching workers. 00:18:52.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:18:52.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:18:52.816 success 192, unsuccess 180, failed 0 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.816 rmmod nvme_tcp 00:18:52.816 rmmod nvme_fabrics 00:18:52.816 rmmod nvme_keyring 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2405153 ']' 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2405153 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 2405153 ']' 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 2405153 00:18:52.816 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2405153 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2405153' 00:18:52.817 killing process with pid 2405153 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 2405153 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 2405153 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.817 03:28:37 
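
Condensed, the abort phase that produced the statistics above is a short RPC sequence: drop the original namespace, re-add it behind a delay bdev so queued I/O stays in flight long enough to be aborted, then point the abort example at the listener. Parameters exactly as logged (paths here are relative to the SPDK repo root; delay-bdev latency arguments are in microseconds):

  # swap the namespace onto a delay bdev so aborts have something to catch
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 50/50 random read/write I/O at depth 64 for 5 s and abort it
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The earlier "kill: (2406487) - No such process" from zcopy.sh line 42 just means the backgrounded loop had already exited before cleanup tried to kill it.
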
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.817 03:28:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.718 03:28:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.718 00:18:54.718 real 0m27.642s 00:18:54.718 user 0m40.205s 00:18:54.718 sys 0m8.484s 00:18:54.718 03:28:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:54.718 03:28:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.718 ************************************ 00:18:54.718 END TEST nvmf_zcopy 00:18:54.718 ************************************ 00:18:54.718 03:28:39 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:54.718 03:28:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:54.718 03:28:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:54.718 03:28:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:54.718 ************************************ 00:18:54.718 START TEST nvmf_nmic 00:18:54.718 ************************************ 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:54.718 * Looking for test storage... 00:18:54.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.718 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.719 
03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triple repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[PATH as above]
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[PATH as above]
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [the exported PATH above]
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:54.719 03:28:39
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.719 03:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:56.622 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:56.622 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:56.622 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:56.622 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:18:56.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:18:56.622 00:18:56.622 --- 10.0.0.2 ping statistics --- 00:18:56.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.622 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:18:56.622 00:18:56.622 --- 10.0.0.1 ping statistics --- 00:18:56.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.622 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:56.622 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2409742 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2409742 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 2409742 ']' 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:56.881 03:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.881 [2024-07-21 03:28:41.983912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
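
The nmic run reuses the two-port layout that nvmf_tcp_init wired up above: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the nvmf_tgt whose startup banner appears above then runs inside that namespace. Condensed to a plain shell sketch (interface names as detected on this host, paths relative to the SPDK repo root):

  # target port in its own namespace, initiator port in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # reachability check before the target starts
  # the target itself is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
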
00:18:56.881 [2024-07-21 03:28:41.983997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.881 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.881 [2024-07-21 03:28:42.053316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.881 [2024-07-21 03:28:42.150205] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.881 [2024-07-21 03:28:42.150261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.881 [2024-07-21 03:28:42.150278] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.881 [2024-07-21 03:28:42.150291] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.881 [2024-07-21 03:28:42.150303] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.881 [2024-07-21 03:28:42.150384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.881 [2024-07-21 03:28:42.150683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.881 [2024-07-21 03:28:42.150718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.881 [2024-07-21 03:28:42.150721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.139 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 [2024-07-21 03:28:42.318559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 Malloc0 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 [2024-07-21 03:28:42.372171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:57.140 test case1: single bdev can't be used in multiple subsystems 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 [2024-07-21 03:28:42.396034] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:57.140 [2024-07-21 03:28:42.396063] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:57.140 [2024-07-21 03:28:42.396094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.140 request: 00:18:57.140 { 00:18:57.140 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:57.140 "namespace": { 00:18:57.140 "bdev_name": "Malloc0", 00:18:57.140 "no_auto_visible": false 00:18:57.140 }, 00:18:57.140 "method": "nvmf_subsystem_add_ns", 00:18:57.140 "req_id": 1 00:18:57.140 } 00:18:57.140 Got JSON-RPC error response 00:18:57.140 response: 00:18:57.140 { 00:18:57.140 "code": -32602, 00:18:57.140 "message": "Invalid parameters" 00:18:57.140 } 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:57.140 Adding namespace failed - expected result. 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:57.140 test case2: host connect to nvmf target in multiple paths 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 [2024-07-21 03:28:42.408139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 03:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.073 03:28:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:58.637 03:28:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.637 03:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:58.637 03:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.637 03:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:58.637 03:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:00.532 03:28:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:00.532 [global] 00:19:00.532 thread=1 00:19:00.532 invalidate=1 00:19:00.532 rw=write 00:19:00.532 time_based=1 00:19:00.532 runtime=1 00:19:00.532 ioengine=libaio 00:19:00.532 direct=1 00:19:00.532 bs=4096 00:19:00.532 iodepth=1 00:19:00.532 norandommap=0 00:19:00.532 numjobs=1 00:19:00.532 00:19:00.532 verify_dump=1 00:19:00.532 verify_backlog=512 00:19:00.532 verify_state_save=0 00:19:00.532 do_verify=1 00:19:00.532 verify=crc32c-intel 00:19:00.532 [job0] 00:19:00.532 filename=/dev/nvme0n1 00:19:00.532 Could not set queue depth (nvme0n1) 00:19:00.789 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.789 fio-3.35 00:19:00.789 Starting 1 thread 00:19:02.159 00:19:02.159 job0: (groupid=0, jobs=1): err= 0: pid=2410375: Sun Jul 21 03:28:47 2024 00:19:02.159 read: IOPS=21, BW=86.9KiB/s 
(89.0kB/s)(88.0KiB/1013msec) 00:19:02.159 slat (nsec): min=6714, max=34127, avg=22912.64, stdev=10353.81 00:19:02.159 clat (usec): min=40468, max=41046, avg=40947.89, stdev=113.68 00:19:02.159 lat (usec): min=40475, max=41058, avg=40970.81, stdev=115.56 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:02.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:02.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:02.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:02.159 | 99.99th=[41157] 00:19:02.159 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:19:02.159 slat (usec): min=6, max=28740, avg=64.55, stdev=1269.79 00:19:02.159 clat (usec): min=128, max=258, avg=149.51, stdev=14.88 00:19:02.159 lat (usec): min=135, max=28928, avg=214.06, stdev=1271.58 00:19:02.159 clat percentiles (usec): 00:19:02.159 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:19:02.159 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:19:02.159 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:19:02.159 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 260], 99.95th=[ 260], 00:19:02.159 | 99.99th=[ 260] 00:19:02.159 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.159 lat (usec) : 250=95.69%, 500=0.19% 00:19:02.159 lat (msec) : 50=4.12% 00:19:02.159 cpu : usr=0.30%, sys=0.59%, ctx=537, majf=0, minf=2 00:19:02.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.159 00:19:02.159 Run status group 0 (all jobs): 00:19:02.159 READ: bw=86.9KiB/s (89.0kB/s), 86.9KiB/s-86.9KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1013-1013msec 00:19:02.159 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:19:02.159 00:19:02.159 Disk stats (read/write): 00:19:02.159 nvme0n1: ios=46/512, merge=0/0, ticks=1765/73, in_queue=1838, util=98.70% 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:02.159 rmmod nvme_tcp 00:19:02.159 rmmod nvme_fabrics 00:19:02.159 rmmod nvme_keyring 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2409742 ']' 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2409742 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 2409742 ']' 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 2409742 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2409742 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2409742' 00:19:02.159 killing process with pid 2409742 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 2409742 00:19:02.159 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 2409742 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.418 03:28:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.322 03:28:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.322 00:19:04.322 real 0m9.750s 00:19:04.322 user 0m22.058s 00:19:04.322 sys 0m2.235s 00:19:04.322 03:28:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:04.322 03:28:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 ************************************ 00:19:04.322 END TEST nvmf_nmic 00:19:04.322 ************************************ 00:19:04.322 03:28:49 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:04.322 03:28:49 
nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:04.322 03:28:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:04.322 03:28:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 ************************************ 00:19:04.322 START TEST nvmf_fio_target 00:19:04.322 ************************************ 00:19:04.322 03:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:04.581 * Looking for test storage... 00:19:04.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.581 03:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.482 03:28:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.482 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.482 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.482 03:28:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.482 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.482 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.482 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:06.483 00:19:06.483 --- 10.0.0.2 ping statistics --- 00:19:06.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.483 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:06.483 00:19:06.483 --- 10.0.0.1 ping statistics --- 00:19:06.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.483 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.483 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2412453 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2412453 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 2412453 ']' 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.739 03:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.739 [2024-07-21 03:28:51.840531] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:06.739 [2024-07-21 03:28:51.840600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.739 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.739 [2024-07-21 03:28:51.912395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.739 [2024-07-21 03:28:52.011824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.739 [2024-07-21 03:28:52.011884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.739 [2024-07-21 03:28:52.011901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.739 [2024-07-21 03:28:52.011915] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.739 [2024-07-21 03:28:52.011926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.739 [2024-07-21 03:28:52.015640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.739 [2024-07-21 03:28:52.015678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.739 [2024-07-21 03:28:52.015731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.739 [2024-07-21 03:28:52.015735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.996 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:07.251 [2024-07-21 03:28:52.424321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.251 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.507 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:07.507 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.764 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:07.764 03:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.021 03:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:08.021 03:28:53 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.278 03:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:08.278 03:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:08.535 03:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.793 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:08.793 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:09.050 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:09.050 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:09.308 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:09.308 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:09.565 03:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.822 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:09.822 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.080 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:10.080 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.337 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.593 [2024-07-21 03:28:55.723219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.593 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:10.850 03:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:11.119 03:28:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:11.727 03:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:13.630 03:28:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:13.630 [global] 00:19:13.630 thread=1 00:19:13.630 invalidate=1 00:19:13.630 rw=write 00:19:13.630 time_based=1 00:19:13.630 runtime=1 00:19:13.630 ioengine=libaio 00:19:13.630 direct=1 00:19:13.630 bs=4096 00:19:13.630 iodepth=1 00:19:13.630 norandommap=0 00:19:13.630 numjobs=1 00:19:13.630 00:19:13.630 verify_dump=1 00:19:13.630 verify_backlog=512 00:19:13.630 verify_state_save=0 00:19:13.630 do_verify=1 00:19:13.630 verify=crc32c-intel 00:19:13.630 [job0] 00:19:13.630 filename=/dev/nvme0n1 00:19:13.630 [job1] 00:19:13.630 filename=/dev/nvme0n2 00:19:13.630 [job2] 00:19:13.630 filename=/dev/nvme0n3 00:19:13.630 [job3] 00:19:13.630 filename=/dev/nvme0n4 00:19:13.886 Could not set queue depth (nvme0n1) 00:19:13.886 Could not set queue depth (nvme0n2) 00:19:13.886 Could not set queue depth (nvme0n3) 00:19:13.886 Could not set queue depth (nvme0n4) 00:19:13.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.886 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.886 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.886 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.886 fio-3.35 00:19:13.886 Starting 4 threads 00:19:15.257 00:19:15.257 job0: (groupid=0, jobs=1): err= 0: pid=2413405: Sun Jul 21 03:29:00 2024 00:19:15.257 read: IOPS=202, BW=812KiB/s (831kB/s)(832KiB/1025msec) 00:19:15.257 slat (nsec): min=15077, max=39403, avg=19859.68, stdev=6159.15 00:19:15.257 clat (usec): min=255, max=42303, avg=4278.43, stdev=12194.68 00:19:15.257 lat (usec): min=272, max=42319, avg=4298.29, stdev=12196.01 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 273], 00:19:15.257 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:19:15.257 | 70.00th=[ 302], 80.00th=[ 416], 90.00th=[ 603], 95.00th=[41681], 00:19:15.257 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.257 | 99.99th=[42206] 00:19:15.257 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:15.257 slat (nsec): min=8474, max=53372, avg=21369.59, stdev=4473.74 00:19:15.257 clat (usec): min=181, max=526, 
avg=224.47, stdev=23.44 00:19:15.257 lat (usec): min=196, max=552, avg=245.84, stdev=24.58 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:19:15.257 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:19:15.257 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 258], 00:19:15.257 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 529], 99.95th=[ 529], 00:19:15.257 | 99.99th=[ 529] 00:19:15.257 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.257 lat (usec) : 250=65.00%, 500=30.97%, 750=1.25% 00:19:15.257 lat (msec) : 50=2.78% 00:19:15.257 cpu : usr=0.78%, sys=2.25%, ctx=721, majf=0, minf=1 00:19:15.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 issued rwts: total=208,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.257 job1: (groupid=0, jobs=1): err= 0: pid=2413416: Sun Jul 21 03:29:00 2024 00:19:15.257 read: IOPS=24, BW=99.0KiB/s (101kB/s)(100KiB/1010msec) 00:19:15.257 slat (nsec): min=13493, max=33128, avg=25413.16, stdev=8595.72 00:19:15.257 clat (usec): min=269, max=41280, avg=36090.72, stdev=13496.37 00:19:15.257 lat (usec): min=285, max=41312, avg=36116.14, stdev=13495.86 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[41157], 00:19:15.257 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:15.257 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:15.257 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:15.257 | 99.99th=[41157] 00:19:15.257 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:15.257 slat (nsec): min=6988, max=42146, avg=16785.40, stdev=3318.74 00:19:15.257 clat (usec): min=142, max=370, avg=186.83, stdev=28.23 00:19:15.257 lat (usec): min=154, max=380, avg=203.61, stdev=28.16 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 167], 00:19:15.257 | 30.00th=[ 169], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:19:15.257 | 70.00th=[ 192], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 237], 00:19:15.257 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 371], 99.95th=[ 371], 00:19:15.257 | 99.99th=[ 371] 00:19:15.257 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.257 lat (usec) : 250=93.85%, 500=2.05% 00:19:15.257 lat (msec) : 50=4.10% 00:19:15.257 cpu : usr=0.50%, sys=0.79%, ctx=538, majf=0, minf=1 00:19:15.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.257 job2: (groupid=0, jobs=1): err= 0: pid=2413461: Sun Jul 21 03:29:00 2024 00:19:15.257 read: IOPS=23, BW=95.1KiB/s (97.4kB/s)(96.0KiB/1009msec) 
00:19:15.257 slat (nsec): min=14599, max=36707, avg=28913.00, stdev=8900.73 00:19:15.257 clat (usec): min=306, max=41512, avg=35918.43, stdev=13731.20 00:19:15.257 lat (usec): min=342, max=41546, avg=35947.34, stdev=13728.79 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 388], 20.00th=[40633], 00:19:15.257 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:15.257 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:15.257 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:15.257 | 99.99th=[41681] 00:19:15.257 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:15.257 slat (nsec): min=8349, max=45676, avg=22820.99, stdev=5061.65 00:19:15.257 clat (usec): min=190, max=1273, avg=255.42, stdev=62.80 00:19:15.257 lat (usec): min=211, max=1295, avg=278.24, stdev=63.31 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:19:15.257 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 253], 00:19:15.257 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 347], 00:19:15.257 | 99.00th=[ 437], 99.50th=[ 478], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:15.257 | 99.99th=[ 1270] 00:19:15.257 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.257 lat (usec) : 250=56.16%, 500=39.74% 00:19:15.257 lat (msec) : 2=0.19%, 50=3.92% 00:19:15.257 cpu : usr=0.69%, sys=1.59%, ctx=537, majf=0, minf=2 00:19:15.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.257 job3: (groupid=0, jobs=1): err= 0: pid=2413476: Sun Jul 21 03:29:00 2024 00:19:15.257 read: IOPS=459, BW=1838KiB/s (1882kB/s)(1840KiB/1001msec) 00:19:15.257 slat (nsec): min=6998, max=56040, avg=8822.12, stdev=5162.15 00:19:15.257 clat (usec): min=197, max=43979, avg=1853.12, stdev=7969.64 00:19:15.257 lat (usec): min=205, max=44000, avg=1861.94, stdev=7974.33 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:19:15.257 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 249], 60.00th=[ 269], 00:19:15.257 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 297], 00:19:15.257 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:19:15.257 | 99.99th=[43779] 00:19:15.257 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:15.257 slat (nsec): min=8149, max=68523, avg=22279.71, stdev=4951.23 00:19:15.257 clat (usec): min=189, max=457, avg=250.20, stdev=32.41 00:19:15.257 lat (usec): min=203, max=502, avg=272.48, stdev=32.70 00:19:15.257 clat percentiles (usec): 00:19:15.257 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:19:15.257 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 251], 00:19:15.257 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:19:15.257 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 457], 99.95th=[ 457], 00:19:15.257 | 99.99th=[ 457] 00:19:15.257 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, 
stdev= 0.00, samples=1 00:19:15.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.257 lat (usec) : 250=55.04%, 500=43.11% 00:19:15.257 lat (msec) : 50=1.85% 00:19:15.257 cpu : usr=1.00%, sys=2.10%, ctx=973, majf=0, minf=1 00:19:15.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.257 issued rwts: total=460,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.257 00:19:15.257 Run status group 0 (all jobs): 00:19:15.257 READ: bw=2798KiB/s (2865kB/s), 95.1KiB/s-1838KiB/s (97.4kB/s-1882kB/s), io=2868KiB (2937kB), run=1001-1025msec 00:19:15.257 WRITE: bw=7992KiB/s (8184kB/s), 1998KiB/s-2046KiB/s (2046kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1025msec 00:19:15.257 00:19:15.257 Disk stats (read/write): 00:19:15.257 nvme0n1: ios=252/512, merge=0/0, ticks=654/114, in_queue=768, util=82.26% 00:19:15.257 nvme0n2: ios=44/512, merge=0/0, ticks=1685/95, in_queue=1780, util=97.84% 00:19:15.257 nvme0n3: ios=77/512, merge=0/0, ticks=990/116, in_queue=1106, util=98.15% 00:19:15.257 nvme0n4: ios=73/512, merge=0/0, ticks=1017/125, in_queue=1142, util=98.13% 00:19:15.257 03:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:15.257 [global] 00:19:15.257 thread=1 00:19:15.257 invalidate=1 00:19:15.257 rw=randwrite 00:19:15.257 time_based=1 00:19:15.257 runtime=1 00:19:15.257 ioengine=libaio 00:19:15.257 direct=1 00:19:15.257 bs=4096 00:19:15.257 iodepth=1 00:19:15.257 norandommap=0 00:19:15.257 numjobs=1 00:19:15.257 00:19:15.257 verify_dump=1 00:19:15.257 verify_backlog=512 00:19:15.257 verify_state_save=0 00:19:15.257 do_verify=1 00:19:15.257 verify=crc32c-intel 00:19:15.257 [job0] 00:19:15.257 filename=/dev/nvme0n1 00:19:15.257 [job1] 00:19:15.257 filename=/dev/nvme0n2 00:19:15.257 [job2] 00:19:15.257 filename=/dev/nvme0n3 00:19:15.257 [job3] 00:19:15.257 filename=/dev/nvme0n4 00:19:15.257 Could not set queue depth (nvme0n1) 00:19:15.257 Could not set queue depth (nvme0n2) 00:19:15.257 Could not set queue depth (nvme0n3) 00:19:15.257 Could not set queue depth (nvme0n4) 00:19:15.514 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.514 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.514 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.514 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.514 fio-3.35 00:19:15.514 Starting 4 threads 00:19:16.884 00:19:16.884 job0: (groupid=0, jobs=1): err= 0: pid=2413749: Sun Jul 21 03:29:01 2024 00:19:16.884 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:19:16.884 slat (nsec): min=6999, max=32880, avg=24432.32, stdev=8707.73 00:19:16.884 clat (usec): min=40909, max=41009, avg=40963.99, stdev=24.53 00:19:16.884 lat (usec): min=40916, max=41024, avg=40988.43, stdev=24.28 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:16.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:19:16.884 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:16.884 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:16.884 | 99.99th=[41157] 00:19:16.884 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:19:16.884 slat (nsec): min=6488, max=43627, avg=10927.95, stdev=6148.42 00:19:16.884 clat (usec): min=174, max=589, avg=238.32, stdev=43.68 00:19:16.884 lat (usec): min=182, max=597, avg=249.25, stdev=44.30 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 210], 00:19:16.884 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 241], 60.00th=[ 245], 00:19:16.884 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 285], 00:19:16.884 | 99.00th=[ 408], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 586], 00:19:16.884 | 99.99th=[ 586] 00:19:16.884 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.884 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.884 lat (usec) : 250=75.47%, 500=19.85%, 750=0.56% 00:19:16.884 lat (msec) : 50=4.12% 00:19:16.884 cpu : usr=0.10%, sys=0.68%, ctx=535, majf=0, minf=1 00:19:16.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.884 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.884 job1: (groupid=0, jobs=1): err= 0: pid=2413750: Sun Jul 21 03:29:01 2024 00:19:16.884 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1001msec) 00:19:16.884 slat (nsec): min=6398, max=33776, avg=23051.77, stdev=9650.19 00:19:16.884 clat (usec): min=319, max=42173, avg=33367.95, stdev=16436.99 00:19:16.884 lat (usec): min=340, max=42179, avg=33391.00, stdev=16441.77 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[40109], 00:19:16.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:16.884 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:16.884 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:16.884 | 99.99th=[42206] 00:19:16.884 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:16.884 slat (nsec): min=6467, max=42231, avg=10989.87, stdev=5891.19 00:19:16.884 clat (usec): min=151, max=478, avg=243.79, stdev=45.03 00:19:16.884 lat (usec): min=159, max=486, avg=254.78, stdev=44.88 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 163], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 217], 00:19:16.884 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 243], 00:19:16.884 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 322], 00:19:16.884 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 478], 00:19:16.884 | 99.99th=[ 478] 00:19:16.884 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.884 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.884 lat (usec) : 250=65.61%, 500=30.48% 00:19:16.884 lat (msec) : 50=3.90% 00:19:16.884 cpu : usr=0.30%, sys=0.50%, ctx=541, majf=0, minf=1 00:19:16.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:16.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.884 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.884 job2: (groupid=0, jobs=1): err= 0: pid=2413754: Sun Jul 21 03:29:01 2024 00:19:16.884 read: IOPS=1834, BW=7337KiB/s (7513kB/s)(7344KiB/1001msec) 00:19:16.884 slat (nsec): min=5482, max=58012, avg=13811.37, stdev=4499.61 00:19:16.884 clat (usec): min=202, max=15284, avg=279.78, stdev=352.92 00:19:16.884 lat (usec): min=208, max=15304, avg=293.59, stdev=353.17 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 243], 00:19:16.884 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:19:16.884 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:19:16.884 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 537], 99.95th=[15270], 00:19:16.884 | 99.99th=[15270] 00:19:16.884 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:16.884 slat (nsec): min=7220, max=69392, avg=17236.41, stdev=5970.77 00:19:16.884 clat (usec): min=145, max=716, avg=198.73, stdev=30.22 00:19:16.884 lat (usec): min=156, max=727, avg=215.97, stdev=29.11 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 180], 00:19:16.884 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:19:16.884 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 255], 00:19:16.884 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 392], 99.95th=[ 396], 00:19:16.884 | 99.99th=[ 717] 00:19:16.884 bw ( KiB/s): min= 8192, max= 8192, per=58.91%, avg=8192.00, stdev= 0.00, samples=1 00:19:16.884 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:16.884 lat (usec) : 250=64.13%, 500=35.68%, 750=0.15% 00:19:16.884 lat (msec) : 20=0.03% 00:19:16.884 cpu : usr=4.90%, sys=8.40%, ctx=3884, majf=0, minf=2 00:19:16.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.884 issued rwts: total=1836,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.884 job3: (groupid=0, jobs=1): err= 0: pid=2413758: Sun Jul 21 03:29:01 2024 00:19:16.884 read: IOPS=35, BW=144KiB/s (147kB/s)(144KiB/1001msec) 00:19:16.884 slat (nsec): min=6950, max=33375, avg=20790.89, stdev=9574.74 00:19:16.884 clat (usec): min=216, max=41129, avg=24037.93, stdev=20320.38 00:19:16.884 lat (usec): min=232, max=41136, avg=24058.72, stdev=20326.50 00:19:16.884 clat percentiles (usec): 00:19:16.884 | 1.00th=[ 217], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 322], 00:19:16.884 | 30.00th=[ 334], 40.00th=[ 562], 50.00th=[41157], 60.00th=[41157], 00:19:16.884 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:16.884 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:16.884 | 99.99th=[41157] 00:19:16.884 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:16.884 slat (nsec): min=6682, max=46869, avg=11198.73, stdev=6181.32 00:19:16.885 clat (usec): min=147, max=583, avg=247.36, stdev=41.08 00:19:16.885 lat (usec): min=154, max=593, avg=258.56, stdev=42.09 00:19:16.885 clat percentiles (usec): 00:19:16.885 | 1.00th=[ 161], 
5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 221], 00:19:16.885 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:19:16.885 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 318], 00:19:16.885 | 99.00th=[ 379], 99.50th=[ 465], 99.90th=[ 586], 99.95th=[ 586], 00:19:16.885 | 99.99th=[ 586] 00:19:16.885 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.885 lat (usec) : 250=54.93%, 500=40.88%, 750=0.36% 00:19:16.885 lat (msec) : 50=3.83% 00:19:16.885 cpu : usr=0.20%, sys=0.70%, ctx=548, majf=0, minf=1 00:19:16.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.885 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.885 00:19:16.885 Run status group 0 (all jobs): 00:19:16.885 READ: bw=7449KiB/s (7628kB/s), 85.4KiB/s-7337KiB/s (87.4kB/s-7513kB/s), io=7680KiB (7864kB), run=1001-1031msec 00:19:16.885 WRITE: bw=13.6MiB/s (14.2MB/s), 1986KiB/s-8184KiB/s (2034kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1031msec 00:19:16.885 00:19:16.885 Disk stats (read/write): 00:19:16.885 nvme0n1: ios=55/512, merge=0/0, ticks=882/118, in_queue=1000, util=98.60% 00:19:16.885 nvme0n2: ios=40/512, merge=0/0, ticks=1661/120, in_queue=1781, util=98.27% 00:19:16.885 nvme0n3: ios=1536/1793, merge=0/0, ticks=387/339, in_queue=726, util=88.83% 00:19:16.885 nvme0n4: ios=57/512, merge=0/0, ticks=728/126, in_queue=854, util=90.52% 00:19:16.885 03:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:16.885 [global] 00:19:16.885 thread=1 00:19:16.885 invalidate=1 00:19:16.885 rw=write 00:19:16.885 time_based=1 00:19:16.885 runtime=1 00:19:16.885 ioengine=libaio 00:19:16.885 direct=1 00:19:16.885 bs=4096 00:19:16.885 iodepth=128 00:19:16.885 norandommap=0 00:19:16.885 numjobs=1 00:19:16.885 00:19:16.885 verify_dump=1 00:19:16.885 verify_backlog=512 00:19:16.885 verify_state_save=0 00:19:16.885 do_verify=1 00:19:16.885 verify=crc32c-intel 00:19:16.885 [job0] 00:19:16.885 filename=/dev/nvme0n1 00:19:16.885 [job1] 00:19:16.885 filename=/dev/nvme0n2 00:19:16.885 [job2] 00:19:16.885 filename=/dev/nvme0n3 00:19:16.885 [job3] 00:19:16.885 filename=/dev/nvme0n4 00:19:16.885 Could not set queue depth (nvme0n1) 00:19:16.885 Could not set queue depth (nvme0n2) 00:19:16.885 Could not set queue depth (nvme0n3) 00:19:16.885 Could not set queue depth (nvme0n4) 00:19:16.885 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.885 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.885 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.885 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.885 fio-3.35 00:19:16.885 Starting 4 threads 00:19:18.256 00:19:18.256 job0: (groupid=0, jobs=1): err= 0: pid=2413988: Sun Jul 21 03:29:03 2024 00:19:18.256 read: IOPS=3010, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:19:18.256 slat (usec): min=2, max=11571, 
avg=170.42, stdev=951.36 00:19:18.256 clat (usec): min=2976, max=57483, avg=22395.61, stdev=10212.87 00:19:18.256 lat (usec): min=5648, max=57500, avg=22566.03, stdev=10308.18 00:19:18.256 clat percentiles (usec): 00:19:18.256 | 1.00th=[ 9372], 5.00th=[11731], 10.00th=[12780], 20.00th=[13698], 00:19:18.256 | 30.00th=[15533], 40.00th=[17433], 50.00th=[19268], 60.00th=[20055], 00:19:18.256 | 70.00th=[27132], 80.00th=[32113], 90.00th=[38011], 95.00th=[43254], 00:19:18.256 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[56886], 00:19:18.256 | 99.99th=[57410] 00:19:18.256 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:19:18.256 slat (usec): min=3, max=11179, avg=135.58, stdev=766.21 00:19:18.256 clat (usec): min=4355, max=46161, avg=19438.59, stdev=9158.13 00:19:18.256 lat (usec): min=4362, max=46167, avg=19574.17, stdev=9236.10 00:19:18.256 clat percentiles (usec): 00:19:18.256 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[11600], 00:19:18.256 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15139], 60.00th=[21365], 00:19:18.256 | 70.00th=[23462], 80.00th=[26870], 90.00th=[32637], 95.00th=[38011], 00:19:18.256 | 99.00th=[42730], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:19:18.256 | 99.99th=[46400] 00:19:18.256 bw ( KiB/s): min=12288, max=12288, per=19.29%, avg=12288.00, stdev= 0.00, samples=2 00:19:18.256 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:18.256 lat (msec) : 4=0.02%, 10=8.58%, 20=48.97%, 50=42.08%, 100=0.34% 00:19:18.256 cpu : usr=3.49%, sys=5.28%, ctx=276, majf=0, minf=15 00:19:18.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:18.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.256 issued rwts: total=3023,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.256 job1: (groupid=0, jobs=1): err= 0: pid=2413990: Sun Jul 21 03:29:03 2024 00:19:18.256 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:19:18.256 slat (usec): min=3, max=8753, avg=114.65, stdev=643.50 00:19:18.256 clat (usec): min=8445, max=32042, avg=15007.25, stdev=3058.48 00:19:18.257 lat (usec): min=8452, max=32062, avg=15121.90, stdev=3120.27 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[12780], 00:19:18.257 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14615], 60.00th=[15139], 00:19:18.257 | 70.00th=[16450], 80.00th=[17433], 90.00th=[18744], 95.00th=[19530], 00:19:18.257 | 99.00th=[24511], 99.50th=[27919], 99.90th=[32113], 99.95th=[32113], 00:19:18.257 | 99.99th=[32113] 00:19:18.257 write: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec); 0 zone resets 00:19:18.257 slat (usec): min=4, max=18529, avg=142.16, stdev=736.90 00:19:18.257 clat (usec): min=4783, max=43728, avg=18925.77, stdev=6935.48 00:19:18.257 lat (usec): min=6597, max=43746, avg=19067.94, stdev=6992.50 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11469], 20.00th=[12518], 00:19:18.257 | 30.00th=[14484], 40.00th=[15008], 50.00th=[16909], 60.00th=[17957], 00:19:18.257 | 70.00th=[22938], 80.00th=[25297], 90.00th=[28181], 95.00th=[31589], 00:19:18.257 | 99.00th=[38536], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:19:18.257 | 99.99th=[43779] 00:19:18.257 bw ( KiB/s): min=13248, max=16384, per=23.26%, 
avg=14816.00, stdev=2217.49, samples=2 00:19:18.257 iops : min= 3312, max= 4096, avg=3704.00, stdev=554.37, samples=2 00:19:18.257 lat (msec) : 10=0.94%, 20=78.06%, 50=21.00% 00:19:18.257 cpu : usr=6.07%, sys=8.46%, ctx=363, majf=0, minf=11 00:19:18.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:18.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.257 issued rwts: total=3584,3831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.257 job2: (groupid=0, jobs=1): err= 0: pid=2413991: Sun Jul 21 03:29:03 2024 00:19:18.257 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:19:18.257 slat (usec): min=2, max=23229, avg=126.18, stdev=984.04 00:19:18.257 clat (usec): min=3779, max=70705, avg=16964.51, stdev=11972.88 00:19:18.257 lat (usec): min=4576, max=70728, avg=17090.70, stdev=12038.22 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10814], 20.00th=[11600], 00:19:18.257 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13304], 60.00th=[13698], 00:19:18.257 | 70.00th=[14484], 80.00th=[16450], 90.00th=[23725], 95.00th=[51119], 00:19:18.257 | 99.00th=[66847], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:19:18.257 | 99.99th=[70779] 00:19:18.257 write: IOPS=4528, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1010msec); 0 zone resets 00:19:18.257 slat (usec): min=3, max=13925, avg=97.72, stdev=637.11 00:19:18.257 clat (usec): min=3386, max=37555, avg=12549.20, stdev=2991.49 00:19:18.257 lat (usec): min=3392, max=43704, avg=12646.92, stdev=3074.71 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[ 4080], 5.00th=[ 6783], 10.00th=[ 9634], 20.00th=[11863], 00:19:18.257 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:19:18.257 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14091], 95.00th=[16057], 00:19:18.257 | 99.00th=[23462], 99.50th=[28181], 99.90th=[37487], 99.95th=[37487], 00:19:18.257 | 99.99th=[37487] 00:19:18.257 bw ( KiB/s): min=16384, max=19192, per=27.92%, avg=17788.00, stdev=1985.56, samples=2 00:19:18.257 iops : min= 4096, max= 4798, avg=4447.00, stdev=496.39, samples=2 00:19:18.257 lat (msec) : 4=0.53%, 10=7.04%, 20=85.58%, 50=4.48%, 100=2.38% 00:19:18.257 cpu : usr=3.07%, sys=6.14%, ctx=465, majf=0, minf=15 00:19:18.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:18.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.257 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.257 job3: (groupid=0, jobs=1): err= 0: pid=2413992: Sun Jul 21 03:29:03 2024 00:19:18.257 read: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:19:18.257 slat (usec): min=3, max=3343, avg=105.53, stdev=446.12 00:19:18.257 clat (usec): min=1016, max=17830, avg=14012.49, stdev=1555.81 00:19:18.257 lat (usec): min=2668, max=18314, avg=14118.02, stdev=1516.85 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[ 6521], 5.00th=[11863], 10.00th=[12387], 20.00th=[13304], 00:19:18.257 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:19:18.257 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15270], 95.00th=[15795], 00:19:18.257 | 99.00th=[16909], 
99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:19:18.257 | 99.99th=[17957] 00:19:18.257 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:19:18.257 slat (usec): min=3, max=9513, avg=105.41, stdev=503.73 00:19:18.257 clat (usec): min=6694, max=23530, avg=14283.96, stdev=1365.17 00:19:18.257 lat (usec): min=6704, max=27151, avg=14389.37, stdev=1308.45 00:19:18.257 clat percentiles (usec): 00:19:18.257 | 1.00th=[10552], 5.00th=[11994], 10.00th=[13042], 20.00th=[13698], 00:19:18.257 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:19:18.257 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15401], 95.00th=[15795], 00:19:18.257 | 99.00th=[20317], 99.50th=[20317], 99.90th=[23462], 99.95th=[23462], 00:19:18.257 | 99.99th=[23462] 00:19:18.257 bw ( KiB/s): min=17416, max=19448, per=28.93%, avg=18432.00, stdev=1436.84, samples=2 00:19:18.257 iops : min= 4354, max= 4862, avg=4608.00, stdev=359.21, samples=2 00:19:18.257 lat (msec) : 2=0.01%, 4=0.17%, 10=0.59%, 20=98.66%, 50=0.57% 00:19:18.257 cpu : usr=7.59%, sys=9.49%, ctx=466, majf=0, minf=11 00:19:18.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:18.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.257 issued rwts: total=4368,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.257 00:19:18.257 Run status group 0 (all jobs): 00:19:18.257 READ: bw=58.3MiB/s (61.1MB/s), 11.8MiB/s-17.0MiB/s (12.3MB/s-17.9MB/s), io=58.9MiB (61.7MB), run=1002-1010msec 00:19:18.257 WRITE: bw=62.2MiB/s (65.2MB/s), 12.0MiB/s-18.0MiB/s (12.5MB/s-18.8MB/s), io=62.8MiB (65.9MB), run=1002-1010msec 00:19:18.257 00:19:18.257 Disk stats (read/write): 00:19:18.257 nvme0n1: ios=2610/2594, merge=0/0, ticks=28524/25424, in_queue=53948, util=86.57% 00:19:18.257 nvme0n2: ios=3122/3399, merge=0/0, ticks=22296/28584, in_queue=50880, util=98.37% 00:19:18.257 nvme0n3: ios=3351/3584, merge=0/0, ticks=33960/28927, in_queue=62887, util=88.60% 00:19:18.257 nvme0n4: ios=3641/4038, merge=0/0, ticks=13020/14330, in_queue=27350, util=98.00% 00:19:18.257 03:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:18.257 [global] 00:19:18.257 thread=1 00:19:18.257 invalidate=1 00:19:18.257 rw=randwrite 00:19:18.257 time_based=1 00:19:18.257 runtime=1 00:19:18.257 ioengine=libaio 00:19:18.257 direct=1 00:19:18.257 bs=4096 00:19:18.257 iodepth=128 00:19:18.257 norandommap=0 00:19:18.257 numjobs=1 00:19:18.257 00:19:18.257 verify_dump=1 00:19:18.257 verify_backlog=512 00:19:18.257 verify_state_save=0 00:19:18.257 do_verify=1 00:19:18.257 verify=crc32c-intel 00:19:18.257 [job0] 00:19:18.257 filename=/dev/nvme0n1 00:19:18.257 [job1] 00:19:18.257 filename=/dev/nvme0n2 00:19:18.257 [job2] 00:19:18.257 filename=/dev/nvme0n3 00:19:18.257 [job3] 00:19:18.257 filename=/dev/nvme0n4 00:19:18.257 Could not set queue depth (nvme0n1) 00:19:18.257 Could not set queue depth (nvme0n2) 00:19:18.257 Could not set queue depth (nvme0n3) 00:19:18.257 Could not set queue depth (nvme0n4) 00:19:18.257 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:18.257 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:19:18.257 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:18.257 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:18.257 fio-3.35 00:19:18.257 Starting 4 threads 00:19:19.630 00:19:19.630 job0: (groupid=0, jobs=1): err= 0: pid=2414216: Sun Jul 21 03:29:04 2024 00:19:19.630 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:19.630 slat (usec): min=2, max=18689, avg=144.64, stdev=1043.79 00:19:19.630 clat (usec): min=4409, max=43506, avg=19971.26, stdev=6431.41 00:19:19.630 lat (usec): min=4415, max=43512, avg=20115.90, stdev=6480.00 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[12125], 20.00th=[16188], 00:19:19.630 | 30.00th=[17171], 40.00th=[18220], 50.00th=[19268], 60.00th=[20055], 00:19:19.630 | 70.00th=[21365], 80.00th=[23987], 90.00th=[26346], 95.00th=[32113], 00:19:19.630 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:19.630 | 99.99th=[43254] 00:19:19.630 write: IOPS=3552, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1003msec); 0 zone resets 00:19:19.630 slat (usec): min=3, max=23680, avg=143.10, stdev=1018.46 00:19:19.630 clat (usec): min=1733, max=48602, avg=18410.18, stdev=5761.45 00:19:19.630 lat (usec): min=3459, max=48620, avg=18553.27, stdev=5846.44 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 6194], 5.00th=[10552], 10.00th=[11076], 20.00th=[13173], 00:19:19.630 | 30.00th=[14091], 40.00th=[15401], 50.00th=[18220], 60.00th=[20841], 00:19:19.630 | 70.00th=[21890], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:19:19.630 | 99.00th=[27919], 99.50th=[27919], 99.90th=[39060], 99.95th=[48497], 00:19:19.630 | 99.99th=[48497] 00:19:19.630 bw ( KiB/s): min=12288, max=15200, per=21.97%, avg=13744.00, stdev=2059.09, samples=2 00:19:19.630 iops : min= 3072, max= 3800, avg=3436.00, stdev=514.77, samples=2 00:19:19.630 lat (msec) : 2=0.02%, 4=0.08%, 10=4.36%, 20=54.95%, 50=40.60% 00:19:19.630 cpu : usr=3.49%, sys=4.29%, ctx=268, majf=0, minf=1 00:19:19.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:19.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.630 issued rwts: total=3072,3563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.630 job1: (groupid=0, jobs=1): err= 0: pid=2414217: Sun Jul 21 03:29:04 2024 00:19:19.630 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:19:19.630 slat (usec): min=2, max=9696, avg=99.55, stdev=594.61 00:19:19.630 clat (usec): min=5831, max=28823, avg=12676.04, stdev=3889.63 00:19:19.630 lat (usec): min=5835, max=28829, avg=12775.59, stdev=3938.33 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[ 9896], 00:19:19.630 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11994], 00:19:19.630 | 70.00th=[13829], 80.00th=[16450], 90.00th=[18220], 95.00th=[20055], 00:19:19.630 | 99.00th=[23725], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:19:19.630 | 99.99th=[28705] 00:19:19.630 write: IOPS=4910, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1005msec); 0 zone resets 00:19:19.630 slat (usec): min=3, max=15480, avg=99.29, stdev=439.45 00:19:19.630 clat (usec): min=4377, max=41947, avg=13965.43, stdev=6281.31 00:19:19.630 lat (usec): 
min=4597, max=41973, avg=14064.72, stdev=6324.84 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10421], 00:19:19.630 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:19:19.630 | 70.00th=[13173], 80.00th=[16712], 90.00th=[22152], 95.00th=[27919], 00:19:19.630 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:19:19.630 | 99.99th=[42206] 00:19:19.630 bw ( KiB/s): min=17984, max=20480, per=30.74%, avg=19232.00, stdev=1764.94, samples=2 00:19:19.630 iops : min= 4496, max= 5120, avg=4808.00, stdev=441.23, samples=2 00:19:19.630 lat (msec) : 10=18.10%, 20=70.87%, 50=11.03% 00:19:19.630 cpu : usr=7.17%, sys=9.66%, ctx=641, majf=0, minf=1 00:19:19.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:19.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.630 issued rwts: total=4608,4935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.630 job2: (groupid=0, jobs=1): err= 0: pid=2414218: Sun Jul 21 03:29:04 2024 00:19:19.630 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1003msec) 00:19:19.630 slat (usec): min=2, max=14096, avg=124.67, stdev=846.86 00:19:19.630 clat (usec): min=2833, max=54107, avg=16300.58, stdev=7171.78 00:19:19.630 lat (usec): min=2846, max=54154, avg=16425.25, stdev=7234.96 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 5735], 5.00th=[10552], 10.00th=[11731], 20.00th=[12387], 00:19:19.630 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13698], 60.00th=[14484], 00:19:19.630 | 70.00th=[15533], 80.00th=[17957], 90.00th=[26870], 95.00th=[31589], 00:19:19.630 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[45351], 00:19:19.630 | 99.99th=[54264] 00:19:19.630 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:19.630 slat (usec): min=3, max=20662, avg=123.45, stdev=912.92 00:19:19.630 clat (usec): min=4438, max=45659, avg=16421.52, stdev=6067.42 00:19:19.630 lat (usec): min=4443, max=45675, avg=16544.97, stdev=6160.55 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 6980], 5.00th=[10945], 10.00th=[11731], 20.00th=[12649], 00:19:19.630 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14091], 60.00th=[14615], 00:19:19.630 | 70.00th=[15008], 80.00th=[24249], 90.00th=[27132], 95.00th=[28443], 00:19:19.630 | 99.00th=[33162], 99.50th=[33424], 99.90th=[38011], 99.95th=[42206], 00:19:19.630 | 99.99th=[45876] 00:19:19.630 bw ( KiB/s): min=16048, max=16384, per=25.92%, avg=16216.00, stdev=237.59, samples=2 00:19:19.630 iops : min= 4012, max= 4096, avg=4054.00, stdev=59.40, samples=2 00:19:19.630 lat (msec) : 4=0.28%, 10=3.17%, 20=76.69%, 50=19.85%, 100=0.01% 00:19:19.630 cpu : usr=3.19%, sys=7.78%, ctx=278, majf=0, minf=1 00:19:19.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:19.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.630 issued rwts: total=3669,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.630 job3: (groupid=0, jobs=1): err= 0: pid=2414219: Sun Jul 21 03:29:04 2024 00:19:19.630 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:19.630 slat (usec): min=2, 
max=16509, avg=166.55, stdev=1130.99 00:19:19.630 clat (usec): min=5816, max=52641, avg=20647.94, stdev=9487.32 00:19:19.630 lat (usec): min=5829, max=56902, avg=20814.49, stdev=9594.29 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 9372], 5.00th=[11076], 10.00th=[12518], 20.00th=[13435], 00:19:19.630 | 30.00th=[14091], 40.00th=[14353], 50.00th=[15008], 60.00th=[19006], 00:19:19.630 | 70.00th=[29230], 80.00th=[30540], 90.00th=[33817], 95.00th=[36963], 00:19:19.630 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:19:19.630 | 99.99th=[52691] 00:19:19.630 write: IOPS=3117, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1003msec); 0 zone resets 00:19:19.630 slat (usec): min=3, max=9836, avg=147.68, stdev=691.83 00:19:19.630 clat (usec): min=353, max=65560, avg=20179.74, stdev=12008.00 00:19:19.630 lat (usec): min=5159, max=65565, avg=20327.42, stdev=12092.80 00:19:19.630 clat percentiles (usec): 00:19:19.630 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[12911], 20.00th=[13566], 00:19:19.630 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[16712], 00:19:19.630 | 70.00th=[22152], 80.00th=[25560], 90.00th=[33817], 95.00th=[51643], 00:19:19.630 | 99.00th=[64226], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:19:19.630 | 99.99th=[65799] 00:19:19.630 bw ( KiB/s): min=12288, max=12288, per=19.64%, avg=12288.00, stdev= 0.00, samples=2 00:19:19.630 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:19.630 lat (usec) : 500=0.02% 00:19:19.630 lat (msec) : 10=4.13%, 20=60.54%, 50=32.15%, 100=3.16% 00:19:19.630 cpu : usr=2.50%, sys=5.69%, ctx=354, majf=0, minf=1 00:19:19.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:19.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.630 issued rwts: total=3072,3127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.630 00:19:19.630 Run status group 0 (all jobs): 00:19:19.630 READ: bw=56.1MiB/s (58.8MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=56.3MiB (59.1MB), run=1003-1005msec 00:19:19.631 WRITE: bw=61.1MiB/s (64.1MB/s), 12.2MiB/s-19.2MiB/s (12.8MB/s-20.1MB/s), io=61.4MiB (64.4MB), run=1003-1005msec 00:19:19.631 00:19:19.631 Disk stats (read/write): 00:19:19.631 nvme0n1: ios=2611/2759, merge=0/0, ticks=32267/36001, in_queue=68268, util=97.70% 00:19:19.631 nvme0n2: ios=4096/4455, merge=0/0, ticks=25155/26433, in_queue=51588, util=86.59% 00:19:19.631 nvme0n3: ios=3121/3245, merge=0/0, ticks=33494/36487, in_queue=69981, util=97.28% 00:19:19.631 nvme0n4: ios=2254/2560, merge=0/0, ticks=25918/26993, in_queue=52911, util=96.11% 00:19:19.631 03:29:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:19.631 03:29:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2414357 00:19:19.631 03:29:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:19.631 03:29:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:19.631 [global] 00:19:19.631 thread=1 00:19:19.631 invalidate=1 00:19:19.631 rw=read 00:19:19.631 time_based=1 00:19:19.631 runtime=10 00:19:19.631 ioengine=libaio 00:19:19.631 direct=1 00:19:19.631 bs=4096 00:19:19.631 iodepth=1 00:19:19.631 norandommap=1 00:19:19.631 numjobs=1 00:19:19.631 00:19:19.631 [job0] 00:19:19.631 filename=/dev/nvme0n1 
00:19:19.631 [job1] 00:19:19.631 filename=/dev/nvme0n2 00:19:19.631 [job2] 00:19:19.631 filename=/dev/nvme0n3 00:19:19.631 [job3] 00:19:19.631 filename=/dev/nvme0n4 00:19:19.631 Could not set queue depth (nvme0n1) 00:19:19.631 Could not set queue depth (nvme0n2) 00:19:19.631 Could not set queue depth (nvme0n3) 00:19:19.631 Could not set queue depth (nvme0n4) 00:19:19.888 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.888 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.888 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.888 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.888 fio-3.35 00:19:19.888 Starting 4 threads 00:19:23.163 03:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:23.163 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:23.163 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=15060992, buflen=4096 00:19:23.163 fio: pid=2414573, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.163 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.163 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:23.163 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8081408, buflen=4096 00:19:23.163 fio: pid=2414572, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.420 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.420 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:23.420 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12251136, buflen=4096 00:19:23.420 fio: pid=2414570, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.679 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=17203200, buflen=4096 00:19:23.679 fio: pid=2414571, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.679 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.679 03:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:23.679 00:19:23.679 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2414570: Sun Jul 21 03:29:08 2024 00:19:23.679 read: IOPS=869, BW=3478KiB/s (3561kB/s)(11.7MiB/3440msec) 00:19:23.679 slat (usec): min=5, max=11767, avg=22.86, stdev=315.79 00:19:23.679 clat (usec): min=226, max=44275, avg=1116.15, stdev=5703.27 00:19:23.679 lat (usec): min=232, max=49968, avg=1139.01, stdev=5730.44 00:19:23.679 clat percentiles (usec): 00:19:23.679 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 273], 00:19:23.679 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 
60.00th=[ 306], 00:19:23.679 | 70.00th=[ 318], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 478], 00:19:23.679 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.679 | 99.99th=[44303] 00:19:23.679 bw ( KiB/s): min= 208, max=10416, per=26.67%, avg=3718.67, stdev=3780.69, samples=6 00:19:23.679 iops : min= 52, max= 2604, avg=929.67, stdev=945.17, samples=6 00:19:23.679 lat (usec) : 250=6.52%, 500=88.97%, 750=2.44%, 1000=0.03% 00:19:23.679 lat (msec) : 2=0.07%, 50=1.94% 00:19:23.679 cpu : usr=0.93%, sys=1.63%, ctx=2996, majf=0, minf=1 00:19:23.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 issued rwts: total=2992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.679 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2414571: Sun Jul 21 03:29:08 2024 00:19:23.679 read: IOPS=1140, BW=4560KiB/s (4670kB/s)(16.4MiB/3684msec) 00:19:23.679 slat (usec): min=4, max=22237, avg=21.61, stdev=462.35 00:19:23.679 clat (usec): min=180, max=42971, avg=848.07, stdev=4952.08 00:19:23.679 lat (usec): min=186, max=42989, avg=868.03, stdev=4972.74 00:19:23.679 clat percentiles (usec): 00:19:23.679 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:19:23.679 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:19:23.679 | 70.00th=[ 237], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 326], 00:19:23.679 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:23.679 | 99.99th=[42730] 00:19:23.679 bw ( KiB/s): min= 96, max=16560, per=29.21%, avg=4073.14, stdev=5915.18, samples=7 00:19:23.679 iops : min= 24, max= 4140, avg=1018.29, stdev=1478.80, samples=7 00:19:23.679 lat (usec) : 250=73.43%, 500=24.78%, 750=0.12%, 1000=0.07% 00:19:23.679 lat (msec) : 2=0.02%, 4=0.05%, 50=1.50% 00:19:23.679 cpu : usr=0.60%, sys=1.30%, ctx=4207, majf=0, minf=1 00:19:23.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 issued rwts: total=4201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.679 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2414572: Sun Jul 21 03:29:08 2024 00:19:23.679 read: IOPS=623, BW=2491KiB/s (2551kB/s)(7892KiB/3168msec) 00:19:23.679 slat (nsec): min=5332, max=70900, avg=13073.74, stdev=8482.17 00:19:23.679 clat (usec): min=211, max=44939, avg=1577.91, stdev=7057.55 00:19:23.679 lat (usec): min=217, max=44956, avg=1590.99, stdev=7058.64 00:19:23.679 clat percentiles (usec): 00:19:23.679 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 258], 00:19:23.679 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 347], 00:19:23.679 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 478], 95.00th=[ 570], 00:19:23.679 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[44827], 00:19:23.679 | 99.99th=[44827] 00:19:23.679 bw ( KiB/s): min= 96, max= 5984, per=16.28%, avg=2270.67, stdev=2498.29, samples=6 00:19:23.679 iops : min= 24, max= 1496, avg=567.67, stdev=624.57, samples=6 
00:19:23.679 lat (usec) : 250=16.31%, 500=75.38%, 750=5.17%, 1000=0.05% 00:19:23.679 lat (msec) : 50=3.04% 00:19:23.679 cpu : usr=0.44%, sys=1.26%, ctx=1976, majf=0, minf=1 00:19:23.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.679 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2414573: Sun Jul 21 03:29:08 2024 00:19:23.679 read: IOPS=1275, BW=5102KiB/s (5224kB/s)(14.4MiB/2883msec) 00:19:23.679 slat (nsec): min=4883, max=65980, avg=12318.06, stdev=7051.13 00:19:23.679 clat (usec): min=211, max=42379, avg=762.40, stdev=4307.34 00:19:23.679 lat (usec): min=217, max=42386, avg=774.72, stdev=4307.83 00:19:23.679 clat percentiles (usec): 00:19:23.679 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 251], 00:19:23.679 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:19:23.679 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 416], 95.00th=[ 474], 00:19:23.679 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:23.679 | 99.99th=[42206] 00:19:23.679 bw ( KiB/s): min= 536, max=12056, per=38.35%, avg=5347.20, stdev=5526.98, samples=5 00:19:23.679 iops : min= 134, max= 3014, avg=1336.80, stdev=1381.74, samples=5 00:19:23.679 lat (usec) : 250=19.17%, 500=77.68%, 750=1.96% 00:19:23.679 lat (msec) : 2=0.03%, 10=0.03%, 50=1.11% 00:19:23.679 cpu : usr=1.01%, sys=2.19%, ctx=3681, majf=0, minf=1 00:19:23.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.679 issued rwts: total=3678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.679 00:19:23.679 Run status group 0 (all jobs): 00:19:23.679 READ: bw=13.6MiB/s (14.3MB/s), 2491KiB/s-5102KiB/s (2551kB/s-5224kB/s), io=50.2MiB (52.6MB), run=2883-3684msec 00:19:23.679 00:19:23.679 Disk stats (read/write): 00:19:23.679 nvme0n1: ios=2989/0, merge=0/0, ticks=3214/0, in_queue=3214, util=95.19% 00:19:23.679 nvme0n2: ios=3978/0, merge=0/0, ticks=3483/0, in_queue=3483, util=95.20% 00:19:23.679 nvme0n3: ios=1864/0, merge=0/0, ticks=3012/0, in_queue=3012, util=96.79% 00:19:23.679 nvme0n4: ios=3713/0, merge=0/0, ticks=3817/0, in_queue=3817, util=99.80% 00:19:23.937 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.937 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:24.195 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:24.195 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:24.458 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:24.458 03:29:09 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:24.716 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:24.716 03:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2414357 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:24.973 nvmf hotplug test: fio failed as expected 00:19:24.973 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.230 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.230 rmmod nvme_tcp 00:19:25.230 rmmod nvme_fabrics 00:19:25.230 rmmod nvme_keyring 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 
2412453 ']' 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2412453 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 2412453 ']' 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 2412453 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2412453 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2412453' 00:19:25.487 killing process with pid 2412453 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 2412453 00:19:25.487 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 2412453 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.746 03:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.649 03:29:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.649 00:19:27.649 real 0m23.271s 00:19:27.649 user 1m21.840s 00:19:27.649 sys 0m6.234s 00:19:27.649 03:29:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:27.649 03:29:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.649 ************************************ 00:19:27.649 END TEST nvmf_fio_target 00:19:27.649 ************************************ 00:19:27.649 03:29:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.649 03:29:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:27.649 03:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:27.649 03:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.649 ************************************ 00:19:27.649 START TEST nvmf_bdevio 00:19:27.649 ************************************ 00:19:27.649 03:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.906 * Looking for test storage... 
00:19:27.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.906 03:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.803 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.803 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.803 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.803 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.804 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.804 03:29:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:29.804 00:19:29.804 --- 10.0.0.2 ping statistics --- 00:19:29.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.804 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:19:29.804 00:19:29.804 --- 10.0.0.1 ping statistics --- 00:19:29.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.804 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.804 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2417121 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2417121 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 2417121 ']' 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:30.061 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.061 [2024-07-21 03:29:15.175554] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:30.061 [2024-07-21 03:29:15.175671] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.061 [2024-07-21 03:29:15.244651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.061 [2024-07-21 03:29:15.339822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.061 [2024-07-21 03:29:15.339891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:30.061 [2024-07-21 03:29:15.339916] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.061 [2024-07-21 03:29:15.339929] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.061 [2024-07-21 03:29:15.339940] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.061 [2024-07-21 03:29:15.340037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.061 [2024-07-21 03:29:15.340094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:30.061 [2024-07-21 03:29:15.340148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:30.061 [2024-07-21 03:29:15.340151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 [2024-07-21 03:29:15.502439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 Malloc0 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
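The rpc_cmd calls traced above are the entire data path for this test: a TCP transport, a 64 MiB RAM-backed Malloc bdev, a subsystem open to any host, that bdev attached as a namespace, and a listener on the namespaced port. A minimal sketch of the same bring-up issued by hand with scripts/rpc.py follows; the default /var/tmp/spdk.sock RPC socket is an assumption here, since the test wraps these calls in its rpc_cmd helper and runs the target inside a network namespace.

# Hedged sketch of the target/bdevio.sh@18-22 sequence above, issued manually.
# Assumes a running nvmf_tgt reachable on the default RPC socket; flags are
# copied verbatim from the trace (-u 8192 sizes in-capsule data, -a allows any host).
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420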
00:19:30.318 [2024-07-21 03:29:15.555656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.318 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.318 { 00:19:30.318 "params": { 00:19:30.318 "name": "Nvme$subsystem", 00:19:30.318 "trtype": "$TEST_TRANSPORT", 00:19:30.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.318 "adrfam": "ipv4", 00:19:30.318 "trsvcid": "$NVMF_PORT", 00:19:30.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.318 "hdgst": ${hdgst:-false}, 00:19:30.319 "ddgst": ${ddgst:-false} 00:19:30.319 }, 00:19:30.319 "method": "bdev_nvme_attach_controller" 00:19:30.319 } 00:19:30.319 EOF 00:19:30.319 )") 00:19:30.319 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:30.319 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:30.319 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:30.319 03:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:30.319 "params": { 00:19:30.319 "name": "Nvme1", 00:19:30.319 "trtype": "tcp", 00:19:30.319 "traddr": "10.0.0.2", 00:19:30.319 "adrfam": "ipv4", 00:19:30.319 "trsvcid": "4420", 00:19:30.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.319 "hdgst": false, 00:19:30.319 "ddgst": false 00:19:30.319 }, 00:19:30.319 "method": "bdev_nvme_attach_controller" 00:19:30.319 }' 00:19:30.319 [2024-07-21 03:29:15.602589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
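The --json /dev/fd/62 argument in the bdevio invocation above is bash process substitution at work: gen_nvmf_target_json, whose bdev_nvme_attach_controller output is printed just above, is handed to bdevio as a pseudo-file, so the config never touches disk. Sketched standalone with the binary path from the trace:

# /dev/fd/62 is what bash's <(...) expands to; bdevio reads the generated
# JSON initiator config through the file descriptor instead of a real file.
./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)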
00:19:30.319 [2024-07-21 03:29:15.602684] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417209 ] 00:19:30.575 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.575 [2024-07-21 03:29:15.665079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.575 [2024-07-21 03:29:15.757275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.575 [2024-07-21 03:29:15.757327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.575 [2024-07-21 03:29:15.757330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.855 I/O targets: 00:19:30.856 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:30.856 00:19:30.856 00:19:30.856 CUnit - A unit testing framework for C - Version 2.1-3 00:19:30.856 http://cunit.sourceforge.net/ 00:19:30.856 00:19:30.856 00:19:30.856 Suite: bdevio tests on: Nvme1n1 00:19:31.113 Test: blockdev write read block ...passed 00:19:31.113 Test: blockdev write zeroes read block ...passed 00:19:31.113 Test: blockdev write zeroes read no split ...passed 00:19:31.114 Test: blockdev write zeroes read split ...passed 00:19:31.114 Test: blockdev write zeroes read split partial ...passed 00:19:31.114 Test: blockdev reset ...[2024-07-21 03:29:16.292426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.114 [2024-07-21 03:29:16.292535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aaf80 (9): Bad file descriptor 00:19:31.371 [2024-07-21 03:29:16.428308] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:31.371 passed 00:19:31.371 Test: blockdev write read 8 blocks ...passed 00:19:31.371 Test: blockdev write read size > 128k ...passed 00:19:31.371 Test: blockdev write read invalid size ...passed 00:19:31.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:31.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:31.371 Test: blockdev write read max offset ...passed 00:19:31.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:31.371 Test: blockdev writev readv 8 blocks ...passed 00:19:31.371 Test: blockdev writev readv 30 x 1block ...passed 00:19:31.371 Test: blockdev writev readv block ...passed 00:19:31.371 Test: blockdev writev readv size > 128k ...passed 00:19:31.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:31.371 Test: blockdev comparev and writev ...[2024-07-21 03:29:16.641703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.371 [2024-07-21 03:29:16.641739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.371 [2024-07-21 03:29:16.641763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.641781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:31.372 [2024-07-21 03:29:16.642942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.372 [2024-07-21 03:29:16.642959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:31.372 passed 00:19:31.629 Test: blockdev nvme passthru rw ...passed 00:19:31.629 Test: blockdev nvme passthru vendor specific ...[2024-07-21 03:29:16.725875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.629 [2024-07-21 03:29:16.725904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:31.629 [2024-07-21 03:29:16.726050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.629 [2024-07-21 03:29:16.726074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:31.629 [2024-07-21 03:29:16.726214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.629 [2024-07-21 03:29:16.726238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:31.629 [2024-07-21 03:29:16.726376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.629 [2024-07-21 03:29:16.726400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:31.629 passed 00:19:31.629 Test: blockdev nvme admin passthru ...passed 00:19:31.629 Test: blockdev copy ...passed 00:19:31.629 00:19:31.629 Run Summary: Type Total Ran Passed Failed Inactive 00:19:31.629 suites 1 1 n/a 0 0 00:19:31.629 tests 23 23 23 0 0 00:19:31.629 asserts 152 152 152 0 n/a 00:19:31.629 00:19:31.629 Elapsed time = 1.289 seconds 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.887 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:31.888 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.888 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:31.888 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.888 03:29:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.888 rmmod nvme_tcp 00:19:31.888 rmmod nvme_fabrics 00:19:31.888 rmmod nvme_keyring 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2417121 ']' 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2417121 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
2417121 ']' 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 2417121 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2417121 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2417121' 00:19:31.888 killing process with pid 2417121 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 2417121 00:19:31.888 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 2417121 00:19:32.146 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.146 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.146 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.146 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.147 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.147 03:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.147 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.147 03:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.053 03:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.312 00:19:34.312 real 0m6.447s 00:19:34.312 user 0m11.282s 00:19:34.312 sys 0m2.072s 00:19:34.312 03:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:34.312 03:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:34.312 ************************************ 00:19:34.312 END TEST nvmf_bdevio 00:19:34.312 ************************************ 00:19:34.312 03:29:19 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:34.312 03:29:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:34.312 03:29:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:34.312 03:29:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.312 ************************************ 00:19:34.312 START TEST nvmf_auth_target 00:19:34.312 ************************************ 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:34.312 * Looking for test storage... 
00:19:34.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.312 03:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.841 03:29:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:36.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:19:36.841 00:19:36.841 --- 10.0.0.2 ping statistics --- 00:19:36.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.841 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:19:36.841 00:19:36.841 --- 10.0.0.1 ping statistics --- 00:19:36.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.841 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2419283 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2419283 00:19:36.841 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2419283 ']' 00:19:36.842 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.842 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:36.842 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
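nvmf_tcp_init, traced here for the second time in this build, wires the two ice ports into a point-to-point pair: target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens TCP port 4420 for NVMe/TCP. Condensed from the trace, with device names and addresses kept verbatim:

# Target NIC in a network namespace, initiator NIC in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on 4420
ping -c 1 10.0.0.2                                                 # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns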
00:19:36.842 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:36.842 03:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2419425 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ef9e502cbcc4e0bee90718f32bf48959438fcd04d59dafc4 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8bL 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ef9e502cbcc4e0bee90718f32bf48959438fcd04d59dafc4 0 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ef9e502cbcc4e0bee90718f32bf48959438fcd04d59dafc4 0 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ef9e502cbcc4e0bee90718f32bf48959438fcd04d59dafc4 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8bL 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8bL 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8bL 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6d45913647d500ea837095cd26448c985a09487596cab4acacbdc90dded553b6 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eYQ 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6d45913647d500ea837095cd26448c985a09487596cab4acacbdc90dded553b6 3 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6d45913647d500ea837095cd26448c985a09487596cab4acacbdc90dded553b6 3 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6d45913647d500ea837095cd26448c985a09487596cab4acacbdc90dded553b6 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:36.842 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eYQ 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eYQ 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.eYQ 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4dd704341f7f7cd7f447ea25340a4ee0 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.h1E 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4dd704341f7f7cd7f447ea25340a4ee0 1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4dd704341f7f7cd7f447ea25340a4ee0 1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=4dd704341f7f7cd7f447ea25340a4ee0 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.h1E 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.h1E 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.h1E 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a54a7b2b8b1f2bf5f2b89029fdf104038d54a51927ee3dac 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uJe 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a54a7b2b8b1f2bf5f2b89029fdf104038d54a51927ee3dac 2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a54a7b2b8b1f2bf5f2b89029fdf104038d54a51927ee3dac 2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a54a7b2b8b1f2bf5f2b89029fdf104038d54a51927ee3dac 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uJe 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uJe 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.uJe 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4dbd813035deadbe1008b010ffa92e1200cea521502b280f 00:19:37.100 
03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iWo 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4dbd813035deadbe1008b010ffa92e1200cea521502b280f 2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4dbd813035deadbe1008b010ffa92e1200cea521502b280f 2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4dbd813035deadbe1008b010ffa92e1200cea521502b280f 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iWo 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iWo 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.iWo 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:37.100 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7847c01a89f342ffae5b46dcd2450ec1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MPL 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7847c01a89f342ffae5b46dcd2450ec1 1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7847c01a89f342ffae5b46dcd2450ec1 1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7847c01a89f342ffae5b46dcd2450ec1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MPL 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MPL 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MPL 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54ed650a2e2091a559001338e76d508b746164c1f144f4223407400d0f423732 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pvF 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 54ed650a2e2091a559001338e76d508b746164c1f144f4223407400d0f423732 3 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54ed650a2e2091a559001338e76d508b746164c1f144f4223407400d0f423732 3 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54ed650a2e2091a559001338e76d508b746164c1f144f4223407400d0f423732 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:37.101 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pvF 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pvF 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.pvF 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2419283 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2419283 ']' 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
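The trace above is the tail of the key-generation phase: target/auth.sh fills keys[0..3] and ckeys[0..2] by calling gen_dhchap_key, which draws len/2 random bytes from /dev/urandom with xxd, wraps them in the DH-HMAC-CHAP secret representation, and stores the result in a mode-0600 tempfile. A minimal re-creation of that helper is sketched below; the names mirror nvmf/common.sh, but the body is an approximation -- in particular the little-endian CRC-32 suffix inside the base64 payload is an assumption based on the DHHC-1 secret format, not code copied from the script.

gen_dhchap_key() {
    local digest=$1 len=$2 # e.g. "sha256" and 32 hex characters
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    local key file

    # len hex characters come from len/2 bytes of /dev/urandom; xxd -p -c0
    # prints one unwrapped line of plain hex, exactly as traced at common.sh@727.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")

    # DHHC-1:<hash id>:<base64(secret || checksum)>: -- the payload encodes the
    # ASCII hex string itself, which is why the secrets passed to nvme connect
    # later in the log decode straight back to these hex keys.
    python - "$key" "${digests[$digest]}" > "$file" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte CRC-32 suffix
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

Run as, e.g., keys[1]=$(gen_dhchap_key sha256 32), this reproduces the shape of the secrets seen later (DHHC-1:01:NGRk...: for the sha256 key above); the digest index 0-3 records which hash transforms the secret, with 0 meaning the secret is used untransformed.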
00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:37.359 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2419425 /var/tmp/host.sock 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2419425 ']' 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:37.616 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8bL 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8bL 00:19:37.873 03:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8bL 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.eYQ ]] 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eYQ 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eYQ 00:19:38.130 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eYQ 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.h1E 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.h1E 00:19:38.387 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.h1E 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.uJe ]] 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uJe 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uJe 00:19:38.644 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uJe 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iWo 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iWo 00:19:38.901 03:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iWo 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MPL ]] 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MPL 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MPL 00:19:39.158 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.MPL 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pvF 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pvF 00:19:39.416 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pvF 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.673 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.931 03:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.189 00:19:40.189 03:29:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.189 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.189 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.446 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.446 { 00:19:40.446 "cntlid": 1, 00:19:40.446 "qid": 0, 00:19:40.446 "state": "enabled", 00:19:40.446 "listen_address": { 00:19:40.446 "trtype": "TCP", 00:19:40.446 "adrfam": "IPv4", 00:19:40.446 "traddr": "10.0.0.2", 00:19:40.446 "trsvcid": "4420" 00:19:40.446 }, 00:19:40.446 "peer_address": { 00:19:40.446 "trtype": "TCP", 00:19:40.446 "adrfam": "IPv4", 00:19:40.446 "traddr": "10.0.0.1", 00:19:40.446 "trsvcid": "60796" 00:19:40.446 }, 00:19:40.446 "auth": { 00:19:40.446 "state": "completed", 00:19:40.446 "digest": "sha256", 00:19:40.446 "dhgroup": "null" 00:19:40.447 } 00:19:40.447 } 00:19:40.447 ]' 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.447 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.704 03:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.640 03:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.897 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.153 00:19:42.153 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.153 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.153 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.410 { 00:19:42.410 "cntlid": 3, 00:19:42.410 "qid": 0, 00:19:42.410 "state": "enabled", 00:19:42.410 "listen_address": { 00:19:42.410 
"trtype": "TCP", 00:19:42.410 "adrfam": "IPv4", 00:19:42.410 "traddr": "10.0.0.2", 00:19:42.410 "trsvcid": "4420" 00:19:42.410 }, 00:19:42.410 "peer_address": { 00:19:42.410 "trtype": "TCP", 00:19:42.410 "adrfam": "IPv4", 00:19:42.410 "traddr": "10.0.0.1", 00:19:42.410 "trsvcid": "56056" 00:19:42.410 }, 00:19:42.410 "auth": { 00:19:42.410 "state": "completed", 00:19:42.410 "digest": "sha256", 00:19:42.410 "dhgroup": "null" 00:19:42.410 } 00:19:42.410 } 00:19:42.410 ]' 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.410 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.667 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.667 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.667 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.667 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.667 03:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.925 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.860 03:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.117 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.374 00:19:44.374 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.374 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.374 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.632 { 00:19:44.632 "cntlid": 5, 00:19:44.632 "qid": 0, 00:19:44.632 "state": "enabled", 00:19:44.632 "listen_address": { 00:19:44.632 "trtype": "TCP", 00:19:44.632 "adrfam": "IPv4", 00:19:44.632 "traddr": "10.0.0.2", 00:19:44.632 "trsvcid": "4420" 00:19:44.632 }, 00:19:44.632 "peer_address": { 00:19:44.632 "trtype": "TCP", 00:19:44.632 "adrfam": "IPv4", 00:19:44.632 "traddr": "10.0.0.1", 00:19:44.632 "trsvcid": "56074" 00:19:44.632 }, 00:19:44.632 "auth": { 00:19:44.632 "state": "completed", 00:19:44.632 "digest": "sha256", 00:19:44.632 "dhgroup": "null" 00:19:44.632 } 00:19:44.632 } 00:19:44.632 ]' 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.632 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.891 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.891 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.891 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.891 03:29:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.891 03:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.149 03:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.084 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.342 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.910 00:19:46.910 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.910 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.910 03:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.168 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.168 { 00:19:47.168 "cntlid": 7, 00:19:47.168 "qid": 0, 00:19:47.168 "state": "enabled", 00:19:47.168 "listen_address": { 00:19:47.168 "trtype": "TCP", 00:19:47.168 "adrfam": "IPv4", 00:19:47.168 "traddr": "10.0.0.2", 00:19:47.168 "trsvcid": "4420" 00:19:47.168 }, 00:19:47.168 "peer_address": { 00:19:47.168 "trtype": "TCP", 00:19:47.168 "adrfam": "IPv4", 00:19:47.168 "traddr": "10.0.0.1", 00:19:47.168 "trsvcid": "56106" 00:19:47.168 }, 00:19:47.168 "auth": { 00:19:47.169 "state": "completed", 00:19:47.169 "digest": "sha256", 00:19:47.169 "dhgroup": "null" 00:19:47.169 } 00:19:47.169 } 00:19:47.169 ]' 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.169 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.426 03:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.386 
03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.386 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.659 03:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.917 00:19:48.917 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.917 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.917 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.175 03:29:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.175 { 00:19:49.175 "cntlid": 9, 00:19:49.175 "qid": 0, 00:19:49.175 "state": "enabled", 00:19:49.175 "listen_address": { 00:19:49.175 "trtype": "TCP", 00:19:49.175 "adrfam": "IPv4", 00:19:49.175 "traddr": "10.0.0.2", 00:19:49.175 "trsvcid": "4420" 00:19:49.175 }, 00:19:49.175 "peer_address": { 00:19:49.175 "trtype": "TCP", 00:19:49.175 "adrfam": "IPv4", 00:19:49.175 "traddr": "10.0.0.1", 00:19:49.175 "trsvcid": "56132" 00:19:49.175 }, 00:19:49.175 "auth": { 00:19:49.175 "state": "completed", 00:19:49.175 "digest": "sha256", 00:19:49.175 "dhgroup": "ffdhe2048" 00:19:49.175 } 00:19:49.175 } 00:19:49.175 ]' 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.175 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.433 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.433 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.433 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.433 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.433 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.692 03:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.628 03:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.885 03:29:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.885 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.143 00:19:51.143 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.143 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.143 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.401 { 00:19:51.401 "cntlid": 11, 00:19:51.401 "qid": 0, 00:19:51.401 "state": "enabled", 00:19:51.401 "listen_address": { 00:19:51.401 "trtype": "TCP", 00:19:51.401 "adrfam": "IPv4", 00:19:51.401 "traddr": "10.0.0.2", 00:19:51.401 "trsvcid": "4420" 00:19:51.401 }, 00:19:51.401 "peer_address": { 00:19:51.401 "trtype": "TCP", 00:19:51.401 "adrfam": "IPv4", 00:19:51.401 "traddr": "10.0.0.1", 00:19:51.401 "trsvcid": "56168" 00:19:51.401 }, 00:19:51.401 "auth": { 00:19:51.401 "state": "completed", 00:19:51.401 "digest": "sha256", 00:19:51.401 "dhgroup": "ffdhe2048" 00:19:51.401 } 00:19:51.401 } 00:19:51.401 ]' 00:19:51.401 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.659 03:29:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.659 03:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.916 03:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.849 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.107 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.364 00:19:53.364 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.364 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.364 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.621 { 00:19:53.621 "cntlid": 13, 00:19:53.621 "qid": 0, 00:19:53.621 "state": "enabled", 00:19:53.621 "listen_address": { 00:19:53.621 "trtype": "TCP", 00:19:53.621 "adrfam": "IPv4", 00:19:53.621 "traddr": "10.0.0.2", 00:19:53.621 "trsvcid": "4420" 00:19:53.621 }, 00:19:53.621 "peer_address": { 00:19:53.621 "trtype": "TCP", 00:19:53.621 "adrfam": "IPv4", 00:19:53.621 "traddr": "10.0.0.1", 00:19:53.621 "trsvcid": "35228" 00:19:53.621 }, 00:19:53.621 "auth": { 00:19:53.621 "state": "completed", 00:19:53.621 "digest": "sha256", 00:19:53.621 "dhgroup": "ffdhe2048" 00:19:53.621 } 00:19:53.621 } 00:19:53.621 ]' 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.621 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.880 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.880 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.880 03:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.137 03:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:19:55.073 03:29:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.073 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.331 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.588 00:19:55.588 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.588 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.588 03:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
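Each iteration of the digest/dhgroup/keyid loop traced here ends with the same verification: the host-side RPC server must still list the attached nvme0 controller, and the target's nvmf_subsystem_get_qpairs output must show a qpair whose DH-HMAC-CHAP negotiation completed with the expected parameters. A condensed sketch of that check follows -- the jq filters are the ones traced at target/auth.sh@44-48, while the wrapper function and its reliance on the log's rpc_cmd/hostrpc helpers are a simplified assumption, not the shipped connect_authenticate.

verify_auth() {
    local digest=$1 dhgroup=$2 subsys=nqn.2024-03.io.spdk:cnode0
    local qpairs

    # The host must report exactly the controller it attached...
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ...and the target must show a qpair that finished authentication with
    # the digest and DH group negotiated for this iteration.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subsys")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
}

After this RPC-level check (here effectively verify_auth sha256 ffdhe2048, matching the qpair JSON printed below), each pass also exercises the kernel initiator with nvme connect --dhchap-secret/--dhchap-ctrl-secret and a matching nvme disconnect before the next combination is configured.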
00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.846 { 00:19:55.846 "cntlid": 15, 00:19:55.846 "qid": 0, 00:19:55.846 "state": "enabled", 00:19:55.846 "listen_address": { 00:19:55.846 "trtype": "TCP", 00:19:55.846 "adrfam": "IPv4", 00:19:55.846 "traddr": "10.0.0.2", 00:19:55.846 "trsvcid": "4420" 00:19:55.846 }, 00:19:55.846 "peer_address": { 00:19:55.846 "trtype": "TCP", 00:19:55.846 "adrfam": "IPv4", 00:19:55.846 "traddr": "10.0.0.1", 00:19:55.846 "trsvcid": "35254" 00:19:55.846 }, 00:19:55.846 "auth": { 00:19:55.846 "state": "completed", 00:19:55.846 "digest": "sha256", 00:19:55.846 "dhgroup": "ffdhe2048" 00:19:55.846 } 00:19:55.846 } 00:19:55.846 ]' 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.846 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.104 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.104 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.104 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.363 03:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.296 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.554 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.812 00:19:57.812 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.812 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.812 03:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.068 { 00:19:58.068 "cntlid": 17, 00:19:58.068 "qid": 0, 00:19:58.068 "state": "enabled", 00:19:58.068 "listen_address": { 00:19:58.068 "trtype": "TCP", 00:19:58.068 "adrfam": "IPv4", 00:19:58.068 "traddr": "10.0.0.2", 00:19:58.068 "trsvcid": "4420" 00:19:58.068 }, 00:19:58.068 "peer_address": { 00:19:58.068 "trtype": "TCP", 00:19:58.068 "adrfam": "IPv4", 00:19:58.068 "traddr": "10.0.0.1", 00:19:58.068 "trsvcid": "35264" 00:19:58.068 }, 00:19:58.068 "auth": { 00:19:58.068 "state": "completed", 00:19:58.068 "digest": "sha256", 00:19:58.068 "dhgroup": "ffdhe3072" 00:19:58.068 } 00:19:58.068 } 00:19:58.068 ]' 00:19:58.068 03:29:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.068 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.069 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.069 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.069 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.326 03:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.258 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.827 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.828 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.828 03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.828 
03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.828 03:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.828 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.828 03:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.085 00:20:00.085 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.085 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.085 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.342 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.342 { 00:20:00.342 "cntlid": 19, 00:20:00.342 "qid": 0, 00:20:00.342 "state": "enabled", 00:20:00.342 "listen_address": { 00:20:00.342 "trtype": "TCP", 00:20:00.343 "adrfam": "IPv4", 00:20:00.343 "traddr": "10.0.0.2", 00:20:00.343 "trsvcid": "4420" 00:20:00.343 }, 00:20:00.343 "peer_address": { 00:20:00.343 "trtype": "TCP", 00:20:00.343 "adrfam": "IPv4", 00:20:00.343 "traddr": "10.0.0.1", 00:20:00.343 "trsvcid": "35298" 00:20:00.343 }, 00:20:00.343 "auth": { 00:20:00.343 "state": "completed", 00:20:00.343 "digest": "sha256", 00:20:00.343 "dhgroup": "ffdhe3072" 00:20:00.343 } 00:20:00.343 } 00:20:00.343 ]' 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.343 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.599 03:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.534 03:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.791 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.048 00:20:02.048 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.048 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
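The second half of each pass (target/auth.sh@52-@56) repeats the handshake with the in-kernel host: nvme-cli dials the same subsystem using the DHHC-1 strings that correspond to the key just registered, then tears down and deregisters. A sketch of that leg; the secret values are elided here because the full strings appear verbatim in the trace, and --dhchap-ctrl-secret is passed only for keys that have a controller counterpart (key3 in this run does not).

  # Kernel-initiator leg of a pass (sketch; secrets shortened).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"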
00:20:02.048 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.306 { 00:20:02.306 "cntlid": 21, 00:20:02.306 "qid": 0, 00:20:02.306 "state": "enabled", 00:20:02.306 "listen_address": { 00:20:02.306 "trtype": "TCP", 00:20:02.306 "adrfam": "IPv4", 00:20:02.306 "traddr": "10.0.0.2", 00:20:02.306 "trsvcid": "4420" 00:20:02.306 }, 00:20:02.306 "peer_address": { 00:20:02.306 "trtype": "TCP", 00:20:02.306 "adrfam": "IPv4", 00:20:02.306 "traddr": "10.0.0.1", 00:20:02.306 "trsvcid": "44074" 00:20:02.306 }, 00:20:02.306 "auth": { 00:20:02.306 "state": "completed", 00:20:02.306 "digest": "sha256", 00:20:02.306 "dhgroup": "ffdhe3072" 00:20:02.306 } 00:20:02.306 } 00:20:02.306 ]' 00:20:02.306 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.564 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.823 03:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.762 03:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.028 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.319 00:20:04.319 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.319 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.319 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.577 { 00:20:04.577 "cntlid": 23, 00:20:04.577 "qid": 0, 00:20:04.577 "state": "enabled", 00:20:04.577 "listen_address": { 00:20:04.577 "trtype": "TCP", 00:20:04.577 "adrfam": "IPv4", 00:20:04.577 "traddr": "10.0.0.2", 00:20:04.577 "trsvcid": "4420" 00:20:04.577 }, 00:20:04.577 "peer_address": { 00:20:04.577 "trtype": "TCP", 00:20:04.577 "adrfam": "IPv4", 
00:20:04.577 "traddr": "10.0.0.1", 00:20:04.577 "trsvcid": "44104" 00:20:04.577 }, 00:20:04.577 "auth": { 00:20:04.577 "state": "completed", 00:20:04.577 "digest": "sha256", 00:20:04.577 "dhgroup": "ffdhe3072" 00:20:04.577 } 00:20:04.577 } 00:20:04.577 ]' 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.577 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.835 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.835 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.835 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.835 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.835 03:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.095 03:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.032 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.290 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.547 00:20:06.547 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.547 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.548 03:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.805 { 00:20:06.805 "cntlid": 25, 00:20:06.805 "qid": 0, 00:20:06.805 "state": "enabled", 00:20:06.805 "listen_address": { 00:20:06.805 "trtype": "TCP", 00:20:06.805 "adrfam": "IPv4", 00:20:06.805 "traddr": "10.0.0.2", 00:20:06.805 "trsvcid": "4420" 00:20:06.805 }, 00:20:06.805 "peer_address": { 00:20:06.805 "trtype": "TCP", 00:20:06.805 "adrfam": "IPv4", 00:20:06.805 "traddr": "10.0.0.1", 00:20:06.805 "trsvcid": "44124" 00:20:06.805 }, 00:20:06.805 "auth": { 00:20:06.805 "state": "completed", 00:20:06.805 "digest": "sha256", 00:20:06.805 "dhgroup": "ffdhe4096" 00:20:06.805 } 00:20:06.805 } 00:20:06.805 ]' 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.805 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.063 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.063 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.063 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.063 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.063 03:29:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.321 03:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.254 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.511 03:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.769 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.028 03:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.286 03:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.286 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.286 { 00:20:09.286 "cntlid": 27, 00:20:09.286 "qid": 0, 00:20:09.287 "state": "enabled", 00:20:09.287 "listen_address": { 00:20:09.287 "trtype": "TCP", 00:20:09.287 "adrfam": "IPv4", 00:20:09.287 "traddr": "10.0.0.2", 00:20:09.287 "trsvcid": "4420" 00:20:09.287 }, 00:20:09.287 "peer_address": { 00:20:09.287 "trtype": "TCP", 00:20:09.287 "adrfam": "IPv4", 00:20:09.287 "traddr": "10.0.0.1", 00:20:09.287 "trsvcid": "44154" 00:20:09.287 }, 00:20:09.287 "auth": { 00:20:09.287 "state": "completed", 00:20:09.287 "digest": "sha256", 00:20:09.287 "dhgroup": "ffdhe4096" 00:20:09.287 } 00:20:09.287 } 00:20:09.287 ]' 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.287 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.544 03:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
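Stepping back, the @92/@93 for-loop markers scattered through the trace give the enclosing structure: every allowed DH group is exercised against every key index, with bdev_nvme_set_options pinning the host to exactly one digest/dhgroup pair before each pass. A sketch of that matrix as the markers suggest it; the dhgroups and keys arrays are populated earlier in the test, outside this excerpt.

  # Outer test matrix, per the @92-@96 trace markers (sketch).
  for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
      for keyid in "${!keys[@]}"; do     # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done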
00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.479 03:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.736 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.737 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.737 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.303 00:20:11.303 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.303 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.303 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.561 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.561 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.561 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.561 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.561 03:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.561 
03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.561 { 00:20:11.561 "cntlid": 29, 00:20:11.561 "qid": 0, 00:20:11.561 "state": "enabled", 00:20:11.561 "listen_address": { 00:20:11.561 "trtype": "TCP", 00:20:11.561 "adrfam": "IPv4", 00:20:11.561 "traddr": "10.0.0.2", 00:20:11.561 "trsvcid": "4420" 00:20:11.561 }, 00:20:11.561 "peer_address": { 00:20:11.561 "trtype": "TCP", 00:20:11.561 "adrfam": "IPv4", 00:20:11.561 "traddr": "10.0.0.1", 00:20:11.561 "trsvcid": "44184" 00:20:11.561 }, 00:20:11.562 "auth": { 00:20:11.562 "state": "completed", 00:20:11.562 "digest": "sha256", 00:20:11.562 "dhgroup": "ffdhe4096" 00:20:11.562 } 00:20:11.562 } 00:20:11.562 ]' 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.562 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.819 03:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.756 03:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.014 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.015 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.015 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.015 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.015 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.584 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.584 { 00:20:13.584 "cntlid": 31, 00:20:13.584 "qid": 0, 00:20:13.584 "state": "enabled", 00:20:13.584 "listen_address": { 00:20:13.584 "trtype": "TCP", 00:20:13.584 "adrfam": "IPv4", 00:20:13.584 "traddr": "10.0.0.2", 00:20:13.584 "trsvcid": "4420" 00:20:13.584 }, 00:20:13.584 "peer_address": { 00:20:13.584 "trtype": "TCP", 00:20:13.584 "adrfam": "IPv4", 00:20:13.584 "traddr": "10.0.0.1", 00:20:13.584 "trsvcid": "50106" 00:20:13.584 }, 00:20:13.584 "auth": { 00:20:13.584 "state": "completed", 00:20:13.584 "digest": "sha256", 00:20:13.584 "dhgroup": "ffdhe4096" 00:20:13.584 } 00:20:13.584 } 00:20:13.584 ]' 00:20:13.584 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.842 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.842 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.842 03:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.842 03:29:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.842 03:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.842 03:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.842 03:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.100 03:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:15.035 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.036 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:15.294 03:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.859 00:20:15.859 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.859 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.859 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.118 { 00:20:16.118 "cntlid": 33, 00:20:16.118 "qid": 0, 00:20:16.118 "state": "enabled", 00:20:16.118 "listen_address": { 00:20:16.118 "trtype": "TCP", 00:20:16.118 "adrfam": "IPv4", 00:20:16.118 "traddr": "10.0.0.2", 00:20:16.118 "trsvcid": "4420" 00:20:16.118 }, 00:20:16.118 "peer_address": { 00:20:16.118 "trtype": "TCP", 00:20:16.118 "adrfam": "IPv4", 00:20:16.118 "traddr": "10.0.0.1", 00:20:16.118 "trsvcid": "50138" 00:20:16.118 }, 00:20:16.118 "auth": { 00:20:16.118 "state": "completed", 00:20:16.118 "digest": "sha256", 00:20:16.118 "dhgroup": "ffdhe6144" 00:20:16.118 } 00:20:16.118 } 00:20:16.118 ]' 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.118 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.375 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.375 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.375 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.375 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.375 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.633 03:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:17.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.565 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.825 03:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.391 00:20:18.391 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.391 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.391 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
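The qpairs dumps are where authentication is actually proven: .auth.state must read completed, and .auth.digest/.auth.dhgroup must match what set_options allowed, while peer_address.trsvcid is just the initiator's ephemeral port and changes on every connection. The three @46-@48 assertions could equally be collapsed into a single comparison, for example:

  # Equivalent single check against the qpair's auth object (sketch).
  [[ $(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"') == \
      'completed sha256 ffdhe6144' ]]   # values for the pass traced below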
00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.650 { 00:20:18.650 "cntlid": 35, 00:20:18.650 "qid": 0, 00:20:18.650 "state": "enabled", 00:20:18.650 "listen_address": { 00:20:18.650 "trtype": "TCP", 00:20:18.650 "adrfam": "IPv4", 00:20:18.650 "traddr": "10.0.0.2", 00:20:18.650 "trsvcid": "4420" 00:20:18.650 }, 00:20:18.650 "peer_address": { 00:20:18.650 "trtype": "TCP", 00:20:18.650 "adrfam": "IPv4", 00:20:18.650 "traddr": "10.0.0.1", 00:20:18.650 "trsvcid": "50174" 00:20:18.650 }, 00:20:18.650 "auth": { 00:20:18.650 "state": "completed", 00:20:18.650 "digest": "sha256", 00:20:18.650 "dhgroup": "ffdhe6144" 00:20:18.650 } 00:20:18.650 } 00:20:18.650 ]' 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.650 03:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.907 03:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.854 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
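Before every attach, the host bdev_nvme layer is pinned to a single digest and a single DH group, so a successful negotiation can only land on the combination under test. The host-side RPC, as issued repeatedly above:

    # Host-side restriction (auth.sh@94); /var/tmp/host.sock is the RPC
    # socket of the separate host application acting as initiator.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144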
00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.111 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.678 00:20:20.678 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.678 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.678 03:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.936 { 00:20:20.936 "cntlid": 37, 00:20:20.936 "qid": 0, 00:20:20.936 "state": "enabled", 00:20:20.936 "listen_address": { 00:20:20.936 "trtype": "TCP", 00:20:20.936 "adrfam": "IPv4", 00:20:20.936 "traddr": "10.0.0.2", 00:20:20.936 "trsvcid": "4420" 00:20:20.936 }, 00:20:20.936 "peer_address": { 00:20:20.936 "trtype": "TCP", 00:20:20.936 "adrfam": "IPv4", 00:20:20.936 "traddr": "10.0.0.1", 00:20:20.936 "trsvcid": "50200" 00:20:20.936 }, 00:20:20.936 "auth": { 00:20:20.936 "state": "completed", 00:20:20.936 "digest": "sha256", 00:20:20.936 "dhgroup": "ffdhe6144" 00:20:20.936 } 00:20:20.936 } 00:20:20.936 ]' 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.936 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.193 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.193 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.193 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.193 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.193 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.453 03:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.386 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.644 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.645 03:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.212 00:20:23.212 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.212 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.212 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.469 { 00:20:23.469 "cntlid": 39, 00:20:23.469 "qid": 0, 00:20:23.469 "state": "enabled", 00:20:23.469 "listen_address": { 00:20:23.469 "trtype": "TCP", 00:20:23.469 "adrfam": "IPv4", 00:20:23.469 "traddr": "10.0.0.2", 00:20:23.469 "trsvcid": "4420" 00:20:23.469 }, 00:20:23.469 "peer_address": { 00:20:23.469 "trtype": "TCP", 00:20:23.469 "adrfam": "IPv4", 00:20:23.469 "traddr": "10.0.0.1", 00:20:23.469 "trsvcid": "48230" 00:20:23.469 }, 00:20:23.469 "auth": { 00:20:23.469 "state": "completed", 00:20:23.469 "digest": "sha256", 00:20:23.469 "dhgroup": "ffdhe6144" 00:20:23.469 } 00:20:23.469 } 00:20:23.469 ]' 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.469 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.727 03:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.662 03:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.919 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.920 03:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.856 00:20:25.856 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.856 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.856 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.113 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.113 { 00:20:26.113 "cntlid": 41, 00:20:26.113 "qid": 0, 00:20:26.113 "state": "enabled", 00:20:26.113 "listen_address": { 00:20:26.113 "trtype": "TCP", 00:20:26.114 "adrfam": "IPv4", 00:20:26.114 "traddr": "10.0.0.2", 00:20:26.114 "trsvcid": "4420" 00:20:26.114 }, 00:20:26.114 "peer_address": { 00:20:26.114 "trtype": "TCP", 00:20:26.114 "adrfam": "IPv4", 00:20:26.114 "traddr": "10.0.0.1", 00:20:26.114 "trsvcid": "48272" 00:20:26.114 }, 00:20:26.114 "auth": { 00:20:26.114 "state": "completed", 00:20:26.114 "digest": "sha256", 00:20:26.114 "dhgroup": "ffdhe8192" 00:20:26.114 } 00:20:26.114 } 00:20:26.114 ]' 00:20:26.114 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.114 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.114 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.371 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.371 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.371 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.371 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.371 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.629 03:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.564 03:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.827 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.828 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.763 00:20:28.763 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.763 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.763 03:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.019 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.020 { 00:20:29.020 "cntlid": 43, 00:20:29.020 "qid": 0, 00:20:29.020 "state": "enabled", 00:20:29.020 "listen_address": { 00:20:29.020 "trtype": "TCP", 00:20:29.020 "adrfam": "IPv4", 00:20:29.020 "traddr": "10.0.0.2", 00:20:29.020 "trsvcid": "4420" 00:20:29.020 }, 00:20:29.020 "peer_address": { 
00:20:29.020 "trtype": "TCP", 00:20:29.020 "adrfam": "IPv4", 00:20:29.020 "traddr": "10.0.0.1", 00:20:29.020 "trsvcid": "48300" 00:20:29.020 }, 00:20:29.020 "auth": { 00:20:29.020 "state": "completed", 00:20:29.020 "digest": "sha256", 00:20:29.020 "dhgroup": "ffdhe8192" 00:20:29.020 } 00:20:29.020 } 00:20:29.020 ]' 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.020 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.586 03:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.521 03:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.460 00:20:31.460 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.460 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.460 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.717 { 00:20:31.717 "cntlid": 45, 00:20:31.717 "qid": 0, 00:20:31.717 "state": "enabled", 00:20:31.717 "listen_address": { 00:20:31.717 "trtype": "TCP", 00:20:31.717 "adrfam": "IPv4", 00:20:31.717 "traddr": "10.0.0.2", 00:20:31.717 "trsvcid": "4420" 00:20:31.717 }, 00:20:31.717 "peer_address": { 00:20:31.717 "trtype": "TCP", 00:20:31.717 "adrfam": "IPv4", 00:20:31.717 "traddr": "10.0.0.1", 00:20:31.717 "trsvcid": "48322" 00:20:31.717 }, 00:20:31.717 "auth": { 00:20:31.717 "state": "completed", 00:20:31.717 "digest": "sha256", 00:20:31.717 "dhgroup": "ffdhe8192" 00:20:31.717 } 00:20:31.717 } 00:20:31.717 ]' 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.717 03:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.717 03:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.717 03:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.976 03:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.976 03:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.976 03:30:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.235 03:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.169 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.427 03:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
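Note the key3 passes just above: the ckey expansion traced at auth.sh@37 produces nothing, so nvmf_subsystem_add_host and bdev_nvme_attach_controller run without --dhchap-ctrlr-key, meaning only the host authenticates and bidirectional verification is skipped for that key. A sketch of that conditional, reconstructed from the traced expansion (the keyid variable name is assumed; the trace uses the positional $3):

    # auth.sh@37: when ckeys[keyid] is empty (key3), the array expands to
    # nothing and the attach carries no controller (bidirectional) key.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"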
00:20:34.359 00:20:34.359 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.359 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.359 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.615 { 00:20:34.615 "cntlid": 47, 00:20:34.615 "qid": 0, 00:20:34.615 "state": "enabled", 00:20:34.615 "listen_address": { 00:20:34.615 "trtype": "TCP", 00:20:34.615 "adrfam": "IPv4", 00:20:34.615 "traddr": "10.0.0.2", 00:20:34.615 "trsvcid": "4420" 00:20:34.615 }, 00:20:34.615 "peer_address": { 00:20:34.615 "trtype": "TCP", 00:20:34.615 "adrfam": "IPv4", 00:20:34.615 "traddr": "10.0.0.1", 00:20:34.615 "trsvcid": "39596" 00:20:34.615 }, 00:20:34.615 "auth": { 00:20:34.615 "state": "completed", 00:20:34.615 "digest": "sha256", 00:20:34.615 "dhgroup": "ffdhe8192" 00:20:34.615 } 00:20:34.615 } 00:20:34.615 ]' 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.615 03:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.872 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.807 
03:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.807 03:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.065 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.323 00:20:36.323 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.323 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.323 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.587 03:30:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.587 { 00:20:36.587 "cntlid": 49, 00:20:36.587 "qid": 0, 00:20:36.587 "state": "enabled", 00:20:36.587 "listen_address": { 00:20:36.587 "trtype": "TCP", 00:20:36.587 "adrfam": "IPv4", 00:20:36.587 "traddr": "10.0.0.2", 00:20:36.587 "trsvcid": "4420" 00:20:36.587 }, 00:20:36.587 "peer_address": { 00:20:36.587 "trtype": "TCP", 00:20:36.587 "adrfam": "IPv4", 00:20:36.587 "traddr": "10.0.0.1", 00:20:36.587 "trsvcid": "39632" 00:20:36.587 }, 00:20:36.587 "auth": { 00:20:36.587 "state": "completed", 00:20:36.587 "digest": "sha384", 00:20:36.587 "dhgroup": "null" 00:20:36.587 } 00:20:36.587 } 00:20:36.587 ]' 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.587 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.845 03:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.107 03:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.086 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.343 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.601 00:20:38.601 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.601 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.601 03:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.858 { 00:20:38.858 "cntlid": 51, 00:20:38.858 "qid": 0, 00:20:38.858 "state": "enabled", 00:20:38.858 "listen_address": { 00:20:38.858 "trtype": "TCP", 00:20:38.858 "adrfam": "IPv4", 00:20:38.858 "traddr": "10.0.0.2", 00:20:38.858 "trsvcid": "4420" 00:20:38.858 }, 00:20:38.858 "peer_address": { 00:20:38.858 "trtype": "TCP", 00:20:38.858 "adrfam": "IPv4", 00:20:38.858 "traddr": "10.0.0.1", 00:20:38.858 "trsvcid": "39654" 00:20:38.858 }, 00:20:38.858 "auth": { 00:20:38.858 "state": "completed", 00:20:38.858 "digest": "sha384", 00:20:38.858 "dhgroup": "null" 00:20:38.858 } 00:20:38.858 } 00:20:38.858 ]' 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
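After the RPC-path check, each pass replays the same key material through the kernel initiator: nvme-cli connects using its textual DHHC-1 secret encoding, and teardown is confirmed by the "disconnected 1 controller(s)" lines in the log. Condensed from the nvme connect/disconnect records above, with the secrets elided:

    # Kernel-initiator leg (auth.sh@52/@55); the DHHC-1:... strings are the
    # same secrets shown verbatim in the log records above.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0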
00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.858 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.116 03:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.050 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:40.307 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.874 00:20:40.874 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.874 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.874 03:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.132 { 00:20:41.132 "cntlid": 53, 00:20:41.132 "qid": 0, 00:20:41.132 "state": "enabled", 00:20:41.132 "listen_address": { 00:20:41.132 "trtype": "TCP", 00:20:41.132 "adrfam": "IPv4", 00:20:41.132 "traddr": "10.0.0.2", 00:20:41.132 "trsvcid": "4420" 00:20:41.132 }, 00:20:41.132 "peer_address": { 00:20:41.132 "trtype": "TCP", 00:20:41.132 "adrfam": "IPv4", 00:20:41.132 "traddr": "10.0.0.1", 00:20:41.132 "trsvcid": "39670" 00:20:41.132 }, 00:20:41.132 "auth": { 00:20:41.132 "state": "completed", 00:20:41.132 "digest": "sha384", 00:20:41.132 "dhgroup": "null" 00:20:41.132 } 00:20:41.132 } 00:20:41.132 ]' 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.132 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.389 03:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.346 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.346 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.603 03:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.861 00:20:42.861 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.861 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.861 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.119 { 00:20:43.119 "cntlid": 55, 00:20:43.119 "qid": 0, 00:20:43.119 "state": "enabled", 00:20:43.119 "listen_address": { 00:20:43.119 "trtype": "TCP", 00:20:43.119 "adrfam": "IPv4", 00:20:43.119 "traddr": "10.0.0.2", 00:20:43.119 "trsvcid": "4420" 00:20:43.119 }, 00:20:43.119 "peer_address": { 00:20:43.119 "trtype": "TCP", 00:20:43.119 "adrfam": "IPv4", 00:20:43.119 "traddr": "10.0.0.1", 00:20:43.119 "trsvcid": "51374" 00:20:43.119 }, 00:20:43.119 "auth": { 00:20:43.119 "state": "completed", 00:20:43.119 "digest": "sha384", 00:20:43.119 "dhgroup": "null" 00:20:43.119 } 00:20:43.119 } 00:20:43.119 ]' 00:20:43.119 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.376 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.633 03:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.564 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:44.822 
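With the null-dhgroup pass complete, the trace above enters the finite-field groups. Each outer iteration first reconfigures the host-side driver so DH-HMAC-CHAP can only negotiate the single digest/dhgroup pair under test, then runs one connect_authenticate cycle per key id (the cycle itself is sketched further down). A minimal sketch of the outer loop, assuming the same host-side socket (/var/tmp/host.sock) used throughout this run and covering only the groups visible in this part of the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
  # Restrict the host to exactly one digest and one DH group, so a
  # successful attach below proves that pair was actually negotiated.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
  # ...one connect_authenticate cycle per key id follows here
done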
03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.822 03:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.822 03:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.822 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.822 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.080 00:20:45.080 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.080 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.080 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.337 { 00:20:45.337 "cntlid": 57, 00:20:45.337 "qid": 0, 00:20:45.337 "state": "enabled", 00:20:45.337 "listen_address": { 00:20:45.337 "trtype": "TCP", 00:20:45.337 "adrfam": "IPv4", 00:20:45.337 "traddr": "10.0.0.2", 00:20:45.337 "trsvcid": "4420" 00:20:45.337 }, 00:20:45.337 "peer_address": { 00:20:45.337 "trtype": "TCP", 00:20:45.337 "adrfam": "IPv4", 00:20:45.337 "traddr": "10.0.0.1", 00:20:45.337 "trsvcid": "51392" 00:20:45.337 }, 00:20:45.337 "auth": { 00:20:45.337 "state": "completed", 00:20:45.337 "digest": "sha384", 00:20:45.337 "dhgroup": "ffdhe2048" 00:20:45.337 } 00:20:45.337 } 00:20:45.337 ]' 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.337 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.337 03:30:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.595 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.595 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.595 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.595 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.595 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.852 03:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.786 03:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.046 03:30:32 
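Two RPC endpoints alternate in each cycle: rpc_cmd drives the target application (its socket is not shown in these lines), authorizing the host NQN and naming the keys it must present, while hostrpc explicitly addresses the host-side bdev layer on /var/tmp/host.sock. A hedged sketch of the target-side half, assuming key1/ckey1 were registered with the target's keyring earlier in the test:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# Authorize the host on the subsystem. The controller key (ckey1) makes the
# authentication bidirectional: the host will also challenge the target.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1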
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.046 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.304 00:20:47.304 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.304 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.304 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.562 { 00:20:47.562 "cntlid": 59, 00:20:47.562 "qid": 0, 00:20:47.562 "state": "enabled", 00:20:47.562 "listen_address": { 00:20:47.562 "trtype": "TCP", 00:20:47.562 "adrfam": "IPv4", 00:20:47.562 "traddr": "10.0.0.2", 00:20:47.562 "trsvcid": "4420" 00:20:47.562 }, 00:20:47.562 "peer_address": { 00:20:47.562 "trtype": "TCP", 00:20:47.562 "adrfam": "IPv4", 00:20:47.562 "traddr": "10.0.0.1", 00:20:47.562 "trsvcid": "51426" 00:20:47.562 }, 00:20:47.562 "auth": { 00:20:47.562 "state": "completed", 00:20:47.562 "digest": "sha384", 00:20:47.562 "dhgroup": "ffdhe2048" 00:20:47.562 } 00:20:47.562 } 00:20:47.562 ]' 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.562 03:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.819 03:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:48.752 03:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.752 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.009 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.265 00:20:49.523 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.523 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.523 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.781 { 00:20:49.781 "cntlid": 61, 00:20:49.781 "qid": 0, 00:20:49.781 "state": "enabled", 00:20:49.781 "listen_address": { 00:20:49.781 "trtype": "TCP", 00:20:49.781 "adrfam": "IPv4", 00:20:49.781 "traddr": "10.0.0.2", 00:20:49.781 "trsvcid": "4420" 00:20:49.781 }, 00:20:49.781 "peer_address": { 00:20:49.781 "trtype": "TCP", 00:20:49.781 "adrfam": "IPv4", 00:20:49.781 "traddr": "10.0.0.1", 00:20:49.781 "trsvcid": "51446" 00:20:49.781 }, 00:20:49.781 "auth": { 00:20:49.781 "state": "completed", 00:20:49.781 "digest": "sha384", 00:20:49.781 "dhgroup": "ffdhe2048" 00:20:49.781 } 00:20:49.781 } 00:20:49.781 ]' 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.781 03:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.039 03:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:50.982 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.240 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.498 00:20:51.498 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.498 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.498 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.756 03:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.756 { 00:20:51.756 "cntlid": 63, 00:20:51.756 "qid": 0, 00:20:51.756 "state": "enabled", 00:20:51.756 "listen_address": { 00:20:51.756 "trtype": "TCP", 00:20:51.756 "adrfam": "IPv4", 00:20:51.756 "traddr": "10.0.0.2", 00:20:51.756 "trsvcid": "4420" 00:20:51.756 }, 00:20:51.756 "peer_address": { 00:20:51.756 "trtype": "TCP", 00:20:51.756 "adrfam": "IPv4", 00:20:51.756 "traddr": "10.0.0.1", 00:20:51.756 "trsvcid": "60556" 00:20:51.756 }, 00:20:51.756 "auth": { 00:20:51.756 "state": "completed", 00:20:51.756 "digest": 
"sha384", 00:20:51.756 "dhgroup": "ffdhe2048" 00:20:51.756 } 00:20:51.756 } 00:20:51.756 ]' 00:20:51.756 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.756 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.756 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.014 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.014 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.014 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.014 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.014 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.273 03:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.211 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.507 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.765 00:20:53.765 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.765 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.765 03:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.023 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.023 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.023 03:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.023 03:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.024 { 00:20:54.024 "cntlid": 65, 00:20:54.024 "qid": 0, 00:20:54.024 "state": "enabled", 00:20:54.024 "listen_address": { 00:20:54.024 "trtype": "TCP", 00:20:54.024 "adrfam": "IPv4", 00:20:54.024 "traddr": "10.0.0.2", 00:20:54.024 "trsvcid": "4420" 00:20:54.024 }, 00:20:54.024 "peer_address": { 00:20:54.024 "trtype": "TCP", 00:20:54.024 "adrfam": "IPv4", 00:20:54.024 "traddr": "10.0.0.1", 00:20:54.024 "trsvcid": "60576" 00:20:54.024 }, 00:20:54.024 "auth": { 00:20:54.024 "state": "completed", 00:20:54.024 "digest": "sha384", 00:20:54.024 "dhgroup": "ffdhe3072" 00:20:54.024 } 00:20:54.024 } 00:20:54.024 ]' 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.024 03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.282 
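Before the kernel-initiator pass that follows, note the secret format: nvme-cli takes the DH-HMAC-CHAP secrets inline in the standard DHHC-1:xx:<base64>: representation, where (per the NVMe in-band authentication spec, not anything shown in this log) the two-digit field records how the secret was transformed, 00 meaning an untransformed secret. A sketch of the host-side connect, with $key and $ctrl_key standing in for the DHHC-1 strings visible in the surrounding records:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# A single I/O queue (-i 1) is enough to exercise authentication; passing
# --dhchap-ctrl-secret as well requests bidirectional authentication.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
  -q "$hostnqn" --hostid "${hostnqn#*uuid:}" \
  --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"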
03:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.214 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.472 03:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.038 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.038 03:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.296 { 00:20:56.296 "cntlid": 67, 00:20:56.296 "qid": 0, 00:20:56.296 "state": "enabled", 00:20:56.296 "listen_address": { 00:20:56.296 "trtype": "TCP", 00:20:56.296 "adrfam": "IPv4", 00:20:56.296 "traddr": "10.0.0.2", 00:20:56.296 "trsvcid": "4420" 00:20:56.296 }, 00:20:56.296 "peer_address": { 00:20:56.296 "trtype": "TCP", 00:20:56.296 "adrfam": "IPv4", 00:20:56.296 "traddr": "10.0.0.1", 00:20:56.296 "trsvcid": "60592" 00:20:56.296 }, 00:20:56.296 "auth": { 00:20:56.296 "state": "completed", 00:20:56.296 "digest": "sha384", 00:20:56.296 "dhgroup": "ffdhe3072" 00:20:56.296 } 00:20:56.296 } 00:20:56.296 ]' 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.296 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.555 03:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.486 
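Each cycle ends symmetrically with the teardown just traced: the kernel controller is disconnected and the host's authorization is revoked on the target, so the next digest/dhgroup/key combination must re-authenticate from a clean slate. The two commands, with the same assumed variables as above:

# Drop the kernel-initiator connection, then revoke the host entry so the
# next iteration cannot ride on stale credentials.
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"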
03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.486 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.744 03:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.001 00:20:58.001 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.001 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.001 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.257 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.257 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.257 03:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.258 03:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.258 03:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.258 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.258 { 00:20:58.258 "cntlid": 69, 00:20:58.258 "qid": 0, 00:20:58.258 "state": "enabled", 00:20:58.258 "listen_address": { 
00:20:58.258 "trtype": "TCP", 00:20:58.258 "adrfam": "IPv4", 00:20:58.258 "traddr": "10.0.0.2", 00:20:58.258 "trsvcid": "4420" 00:20:58.258 }, 00:20:58.258 "peer_address": { 00:20:58.258 "trtype": "TCP", 00:20:58.258 "adrfam": "IPv4", 00:20:58.258 "traddr": "10.0.0.1", 00:20:58.258 "trsvcid": "60628" 00:20:58.258 }, 00:20:58.258 "auth": { 00:20:58.258 "state": "completed", 00:20:58.258 "digest": "sha384", 00:20:58.258 "dhgroup": "ffdhe3072" 00:20:58.258 } 00:20:58.258 } 00:20:58.258 ]' 00:20:58.258 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.258 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.514 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.770 03:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.701 03:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.959 
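key3 is the one key id in this run without a companion controller key, so in the trace that follows the ${ckeys[$3]:+...} expansion collapses to nothing and both nvmf_subsystem_add_host and bdev_nvme_attach_controller carry only --dhchap-key key3: the target still authenticates the host, but the host does not challenge the target. The conditional is plain bash parameter expansion; a standalone illustration with hypothetical placeholder values:

ckeys=(ck0 ck1 ck2 "")  # key id 3 deliberately has no controller key
for keyid in 0 1 2 3; do
  # :+ yields the alternate words only when ckeys[keyid] is non-empty, so
  # the array stays empty for key3 and no --dhchap-ctrlr-key is passed.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "key$keyid -> ${ckey[@]:-one-way auth, no ctrlr key}"
done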
03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.959 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.215 00:21:00.215 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.215 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.215 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.472 { 00:21:00.472 "cntlid": 71, 00:21:00.472 "qid": 0, 00:21:00.472 "state": "enabled", 00:21:00.472 "listen_address": { 00:21:00.472 "trtype": "TCP", 00:21:00.472 "adrfam": "IPv4", 00:21:00.472 "traddr": "10.0.0.2", 00:21:00.472 "trsvcid": "4420" 00:21:00.472 }, 00:21:00.472 "peer_address": { 00:21:00.472 "trtype": "TCP", 00:21:00.472 "adrfam": "IPv4", 00:21:00.472 "traddr": "10.0.0.1", 00:21:00.472 "trsvcid": "60656" 00:21:00.472 }, 00:21:00.472 "auth": { 00:21:00.472 "state": "completed", 00:21:00.472 "digest": "sha384", 00:21:00.472 "dhgroup": "ffdhe3072" 00:21:00.472 } 00:21:00.472 } 00:21:00.472 ]' 00:21:00.472 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.729 03:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.987 03:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.941 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.204 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.462 00:21:02.462 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.462 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.462 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.720 { 00:21:02.720 "cntlid": 73, 00:21:02.720 "qid": 0, 00:21:02.720 "state": "enabled", 00:21:02.720 "listen_address": { 00:21:02.720 "trtype": "TCP", 00:21:02.720 "adrfam": "IPv4", 00:21:02.720 "traddr": "10.0.0.2", 00:21:02.720 "trsvcid": "4420" 00:21:02.720 }, 00:21:02.720 "peer_address": { 00:21:02.720 "trtype": "TCP", 00:21:02.720 "adrfam": "IPv4", 00:21:02.720 "traddr": "10.0.0.1", 00:21:02.720 "trsvcid": "47034" 00:21:02.720 }, 00:21:02.720 "auth": { 00:21:02.720 "state": "completed", 00:21:02.720 "digest": "sha384", 00:21:02.720 "dhgroup": "ffdhe4096" 00:21:02.720 } 00:21:02.720 } 00:21:02.720 ]' 00:21:02.720 03:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.720 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.720 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.978 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.978 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.978 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.978 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.978 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.235 03:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.170 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.427 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.684 00:21:04.684 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.684 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.684 03:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.942 { 00:21:04.942 "cntlid": 75, 00:21:04.942 "qid": 0, 00:21:04.942 "state": "enabled", 00:21:04.942 "listen_address": { 00:21:04.942 "trtype": "TCP", 00:21:04.942 "adrfam": "IPv4", 00:21:04.942 "traddr": "10.0.0.2", 00:21:04.942 "trsvcid": "4420" 00:21:04.942 }, 00:21:04.942 "peer_address": { 00:21:04.942 "trtype": "TCP", 00:21:04.942 "adrfam": "IPv4", 00:21:04.942 "traddr": "10.0.0.1", 00:21:04.942 "trsvcid": "47050" 00:21:04.942 }, 00:21:04.942 "auth": { 00:21:04.942 "state": "completed", 00:21:04.942 "digest": "sha384", 00:21:04.942 "dhgroup": "ffdhe4096" 00:21:04.942 } 00:21:04.942 } 00:21:04.942 ]' 00:21:04.942 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.199 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.456 03:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.435 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.693 03:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.950 00:21:06.950 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.950 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.950 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.208 { 00:21:07.208 "cntlid": 77, 00:21:07.208 "qid": 0, 00:21:07.208 "state": "enabled", 00:21:07.208 "listen_address": { 00:21:07.208 "trtype": "TCP", 00:21:07.208 "adrfam": "IPv4", 00:21:07.208 "traddr": "10.0.0.2", 00:21:07.208 "trsvcid": "4420" 00:21:07.208 }, 00:21:07.208 "peer_address": { 00:21:07.208 "trtype": "TCP", 00:21:07.208 "adrfam": "IPv4", 00:21:07.208 "traddr": "10.0.0.1", 00:21:07.208 "trsvcid": "47068" 00:21:07.208 }, 00:21:07.208 "auth": { 00:21:07.208 "state": "completed", 00:21:07.208 "digest": "sha384", 00:21:07.208 "dhgroup": "ffdhe4096" 00:21:07.208 } 00:21:07.208 } 00:21:07.208 ]' 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.208 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:07.465 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.465 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.465 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.465 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.465 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.724 03:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.658 03:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.916 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.173 00:21:09.173 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.173 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.173 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.432 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.432 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.432 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.432 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.690 03:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.691 { 00:21:09.691 "cntlid": 79, 00:21:09.691 "qid": 0, 00:21:09.691 "state": "enabled", 00:21:09.691 "listen_address": { 00:21:09.691 "trtype": "TCP", 00:21:09.691 "adrfam": "IPv4", 00:21:09.691 "traddr": "10.0.0.2", 00:21:09.691 "trsvcid": "4420" 00:21:09.691 }, 00:21:09.691 "peer_address": { 00:21:09.691 "trtype": "TCP", 00:21:09.691 "adrfam": "IPv4", 00:21:09.691 "traddr": "10.0.0.1", 00:21:09.691 "trsvcid": "47092" 00:21:09.691 }, 00:21:09.691 "auth": { 00:21:09.691 "state": "completed", 00:21:09.691 "digest": "sha384", 00:21:09.691 "dhgroup": "ffdhe4096" 00:21:09.691 } 00:21:09.691 } 00:21:09.691 ]' 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.691 03:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.947 03:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.882 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.882 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.140 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.706 00:21:11.706 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.706 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.706 03:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.963 { 00:21:11.963 "cntlid": 81, 00:21:11.963 "qid": 0, 00:21:11.963 "state": "enabled", 00:21:11.963 "listen_address": { 00:21:11.963 "trtype": "TCP", 00:21:11.963 "adrfam": "IPv4", 00:21:11.963 "traddr": "10.0.0.2", 00:21:11.963 "trsvcid": "4420" 00:21:11.963 }, 00:21:11.963 "peer_address": { 00:21:11.963 "trtype": "TCP", 00:21:11.963 "adrfam": "IPv4", 00:21:11.963 "traddr": "10.0.0.1", 00:21:11.963 "trsvcid": "59064" 00:21:11.963 }, 00:21:11.963 "auth": { 00:21:11.963 "state": "completed", 00:21:11.963 "digest": "sha384", 00:21:11.963 "dhgroup": "ffdhe6144" 00:21:11.963 } 00:21:11.963 } 00:21:11.963 ]' 00:21:11.963 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.219 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.220 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.478 03:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.413 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.670 03:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.235 00:21:14.235 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.235 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.235 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.494 { 00:21:14.494 "cntlid": 83, 00:21:14.494 "qid": 0, 00:21:14.494 "state": "enabled", 00:21:14.494 "listen_address": { 00:21:14.494 "trtype": "TCP", 00:21:14.494 "adrfam": "IPv4", 00:21:14.494 "traddr": "10.0.0.2", 00:21:14.494 "trsvcid": "4420" 00:21:14.494 }, 00:21:14.494 "peer_address": { 00:21:14.494 "trtype": "TCP", 00:21:14.494 "adrfam": "IPv4", 00:21:14.494 "traddr": "10.0.0.1", 00:21:14.494 "trsvcid": "59088" 00:21:14.494 }, 00:21:14.494 "auth": { 00:21:14.494 "state": "completed", 00:21:14.494 "digest": "sha384", 00:21:14.494 
"dhgroup": "ffdhe6144" 00:21:14.494 } 00:21:14.494 } 00:21:14.494 ]' 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.494 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.752 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.752 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.752 03:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.010 03:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.948 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.206 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.774 00:21:16.774 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.774 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.774 03:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.033 { 00:21:17.033 "cntlid": 85, 00:21:17.033 "qid": 0, 00:21:17.033 "state": "enabled", 00:21:17.033 "listen_address": { 00:21:17.033 "trtype": "TCP", 00:21:17.033 "adrfam": "IPv4", 00:21:17.033 "traddr": "10.0.0.2", 00:21:17.033 "trsvcid": "4420" 00:21:17.033 }, 00:21:17.033 "peer_address": { 00:21:17.033 "trtype": "TCP", 00:21:17.033 "adrfam": "IPv4", 00:21:17.033 "traddr": "10.0.0.1", 00:21:17.033 "trsvcid": "59094" 00:21:17.033 }, 00:21:17.033 "auth": { 00:21:17.033 "state": "completed", 00:21:17.033 "digest": "sha384", 00:21:17.033 "dhgroup": "ffdhe6144" 00:21:17.033 } 00:21:17.033 } 00:21:17.033 ]' 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.033 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.291 03:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.227 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.486 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.487 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.487 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.745 03:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.745 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.745 03:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.311 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.311 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.311 { 00:21:19.311 "cntlid": 87, 00:21:19.311 "qid": 0, 00:21:19.311 "state": "enabled", 00:21:19.311 "listen_address": { 00:21:19.311 "trtype": "TCP", 00:21:19.311 "adrfam": "IPv4", 00:21:19.312 "traddr": "10.0.0.2", 00:21:19.312 "trsvcid": "4420" 00:21:19.312 }, 00:21:19.312 "peer_address": { 00:21:19.312 "trtype": "TCP", 00:21:19.312 "adrfam": "IPv4", 00:21:19.312 "traddr": "10.0.0.1", 00:21:19.312 "trsvcid": "59114" 00:21:19.312 }, 00:21:19.312 "auth": { 00:21:19.312 "state": "completed", 00:21:19.312 "digest": "sha384", 00:21:19.312 "dhgroup": "ffdhe6144" 00:21:19.312 } 00:21:19.312 } 00:21:19.312 ]' 00:21:19.312 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.569 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.827 03:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:20.762 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
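At this seam the suite advances from ffdhe6144 to ffdhe8192, and every digest/dhgroup pass repeats the same sequence per key: constrain the host-side DHCHAP options, register the host NQN on the target with key N, attach and verify the authenticated qpair, detach, then exercise the same credentials through the kernel initiator before removing the host again. A minimal sketch of one such iteration, assuming the helper names visible in this trace (rpc_cmd drives the target-side RPC server, hostrpc wraps rpc.py -s /var/tmp/host.sock; the DHHC-1 secrets are elided placeholders here, not the keys from the log):

    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

    # Host side: accept only this digest/dhgroup combination for the pass.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host NQN with DHCHAP key 0. The controller key is
    # optional -- the key3 iterations above omit it, which is what the
    # ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in the trace encodes.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach over TCP and confirm the qpair completed bidirectional auth.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                   # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0

    # Same credentials through the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "DHHC-1:00:<elided>:" --dhchap-ctrl-secret "DHHC-1:03:<elided>:"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq checks against .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state are what each pass below asserts: the qpair must report the digest/dhgroup the host was restricted to, with auth state "completed".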
00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.763 03:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.021 03:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.956 00:21:21.956 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.956 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.956 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.213 { 00:21:22.213 "cntlid": 89, 00:21:22.213 "qid": 0, 00:21:22.213 "state": "enabled", 00:21:22.213 "listen_address": { 00:21:22.213 "trtype": "TCP", 00:21:22.213 "adrfam": "IPv4", 00:21:22.213 "traddr": "10.0.0.2", 
00:21:22.213 "trsvcid": "4420" 00:21:22.213 }, 00:21:22.213 "peer_address": { 00:21:22.213 "trtype": "TCP", 00:21:22.213 "adrfam": "IPv4", 00:21:22.213 "traddr": "10.0.0.1", 00:21:22.213 "trsvcid": "59146" 00:21:22.213 }, 00:21:22.213 "auth": { 00:21:22.213 "state": "completed", 00:21:22.213 "digest": "sha384", 00:21:22.213 "dhgroup": "ffdhe8192" 00:21:22.213 } 00:21:22.213 } 00:21:22.213 ]' 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.213 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.472 03:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:23.406 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.406 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.406 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.406 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.666 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.666 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.666 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.666 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.925 03:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.863 00:21:24.863 03:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.863 03:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.863 03:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.863 { 00:21:24.863 "cntlid": 91, 00:21:24.863 "qid": 0, 00:21:24.863 "state": "enabled", 00:21:24.863 "listen_address": { 00:21:24.863 "trtype": "TCP", 00:21:24.863 "adrfam": "IPv4", 00:21:24.863 "traddr": "10.0.0.2", 00:21:24.863 "trsvcid": "4420" 00:21:24.863 }, 00:21:24.863 "peer_address": { 00:21:24.863 "trtype": "TCP", 00:21:24.863 "adrfam": "IPv4", 00:21:24.863 "traddr": "10.0.0.1", 00:21:24.863 "trsvcid": "41472" 00:21:24.863 }, 00:21:24.863 "auth": { 00:21:24.863 "state": "completed", 00:21:24.863 "digest": "sha384", 00:21:24.863 "dhgroup": "ffdhe8192" 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ]' 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.863 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.142 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.142 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.142 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.142 03:31:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.142 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.455 03:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.415 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.672 03:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.618 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.618 { 00:21:27.618 "cntlid": 93, 00:21:27.618 "qid": 0, 00:21:27.618 "state": "enabled", 00:21:27.618 "listen_address": { 00:21:27.618 "trtype": "TCP", 00:21:27.618 "adrfam": "IPv4", 00:21:27.618 "traddr": "10.0.0.2", 00:21:27.618 "trsvcid": "4420" 00:21:27.618 }, 00:21:27.618 "peer_address": { 00:21:27.618 "trtype": "TCP", 00:21:27.618 "adrfam": "IPv4", 00:21:27.618 "traddr": "10.0.0.1", 00:21:27.618 "trsvcid": "41500" 00:21:27.618 }, 00:21:27.618 "auth": { 00:21:27.618 "state": "completed", 00:21:27.618 "digest": "sha384", 00:21:27.618 "dhgroup": "ffdhe8192" 00:21:27.618 } 00:21:27.618 } 00:21:27.618 ]' 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.618 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.875 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.875 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.875 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.875 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.875 03:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.132 03:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.062 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.319 03:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.249 00:21:30.249 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.249 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.249 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.505 03:31:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.505 { 00:21:30.505 "cntlid": 95, 00:21:30.505 "qid": 0, 00:21:30.505 "state": "enabled", 00:21:30.505 "listen_address": { 00:21:30.505 "trtype": "TCP", 00:21:30.505 "adrfam": "IPv4", 00:21:30.505 "traddr": "10.0.0.2", 00:21:30.505 "trsvcid": "4420" 00:21:30.505 }, 00:21:30.505 "peer_address": { 00:21:30.505 "trtype": "TCP", 00:21:30.505 "adrfam": "IPv4", 00:21:30.505 "traddr": "10.0.0.1", 00:21:30.505 "trsvcid": "41524" 00:21:30.505 }, 00:21:30.505 "auth": { 00:21:30.505 "state": "completed", 00:21:30.505 "digest": "sha384", 00:21:30.505 "dhgroup": "ffdhe8192" 00:21:30.505 } 00:21:30.505 } 00:21:30.505 ]' 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.505 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.761 03:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.746 03:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.004 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.261 00:21:32.261 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.261 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.261 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.518 { 00:21:32.518 "cntlid": 97, 00:21:32.518 "qid": 0, 00:21:32.518 "state": "enabled", 00:21:32.518 "listen_address": { 00:21:32.518 "trtype": "TCP", 00:21:32.518 "adrfam": "IPv4", 00:21:32.518 "traddr": "10.0.0.2", 00:21:32.518 "trsvcid": "4420" 00:21:32.518 }, 00:21:32.518 "peer_address": { 00:21:32.518 "trtype": "TCP", 00:21:32.518 "adrfam": "IPv4", 00:21:32.518 "traddr": "10.0.0.1", 00:21:32.518 "trsvcid": "60994" 00:21:32.518 }, 00:21:32.518 "auth": { 00:21:32.518 "state": "completed", 00:21:32.518 "digest": "sha512", 00:21:32.518 "dhgroup": "null" 00:21:32.518 } 00:21:32.518 } 00:21:32.518 ]' 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.518 03:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.775 03:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:33.706 03:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.706 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.964 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.529 00:21:34.529 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.529 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.529 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.786 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.786 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.787 { 00:21:34.787 "cntlid": 99, 00:21:34.787 "qid": 0, 00:21:34.787 "state": "enabled", 00:21:34.787 "listen_address": { 00:21:34.787 "trtype": "TCP", 00:21:34.787 "adrfam": "IPv4", 00:21:34.787 "traddr": "10.0.0.2", 00:21:34.787 "trsvcid": "4420" 00:21:34.787 }, 00:21:34.787 "peer_address": { 00:21:34.787 "trtype": "TCP", 00:21:34.787 "adrfam": "IPv4", 00:21:34.787 "traddr": "10.0.0.1", 00:21:34.787 "trsvcid": "32786" 00:21:34.787 }, 00:21:34.787 "auth": { 00:21:34.787 "state": "completed", 00:21:34.787 "digest": "sha512", 00:21:34.787 "dhgroup": "null" 00:21:34.787 } 00:21:34.787 } 00:21:34.787 ]' 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.787 03:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.044 03:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 
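For orientation: each pass of the loop traced above exercises one digest/dhgroup/key combination end to end — constrain the SPDK host stack's DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the key pair under test, attach a controller so the handshake runs, then verify and tear down. A minimal sketch of the sha512/null/key1 pass just shown, assuming the NQNs, address, and /var/tmp/host.sock socket from the log (rpc_cmd in the trace resolves to the target application's default RPC socket; key1/ckey1 are key names set up earlier in the test):

# Minimal sketch, assuming the NQNs and sockets used in the log above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# Target side: allow this host NQN with the key pair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; the DH-HMAC-CHAP handshake runs
# during connect and fails the attach if authentication fails.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1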
00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.974 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.232 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.489 00:21:36.489 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.489 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.489 03:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.746 { 00:21:36.746 "cntlid": 101, 00:21:36.746 "qid": 0, 00:21:36.746 "state": "enabled", 00:21:36.746 "listen_address": { 00:21:36.746 "trtype": "TCP", 00:21:36.746 "adrfam": "IPv4", 00:21:36.746 "traddr": "10.0.0.2", 00:21:36.746 "trsvcid": "4420" 00:21:36.746 }, 00:21:36.746 "peer_address": { 00:21:36.746 "trtype": "TCP", 00:21:36.746 "adrfam": "IPv4", 00:21:36.746 "traddr": "10.0.0.1", 00:21:36.746 "trsvcid": "32824" 00:21:36.746 }, 00:21:36.746 "auth": { 00:21:36.746 "state": "completed", 00:21:36.746 "digest": "sha512", 00:21:36.746 "dhgroup": "null" 00:21:36.746 } 00:21:36.746 } 00:21:36.746 ]' 00:21:36.746 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.003 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.004 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.261 03:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.191 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.448 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.704 00:21:38.704 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.704 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.704 03:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.960 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.960 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.960 03:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.960 03:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.960 03:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.961 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.961 { 00:21:38.961 "cntlid": 103, 00:21:38.961 "qid": 0, 00:21:38.961 "state": "enabled", 00:21:38.961 "listen_address": { 00:21:38.961 "trtype": "TCP", 00:21:38.961 "adrfam": "IPv4", 00:21:38.961 "traddr": "10.0.0.2", 00:21:38.961 "trsvcid": "4420" 00:21:38.961 }, 00:21:38.961 "peer_address": { 00:21:38.961 "trtype": "TCP", 00:21:38.961 "adrfam": "IPv4", 00:21:38.961 "traddr": "10.0.0.1", 00:21:38.961 "trsvcid": "32850" 00:21:38.961 }, 00:21:38.961 "auth": { 00:21:38.961 "state": "completed", 00:21:38.961 "digest": "sha512", 00:21:38.961 "dhgroup": "null" 00:21:38.961 } 00:21:38.961 } 00:21:38.961 ]' 00:21:38.961 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.961 03:31:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.961 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.961 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:38.961 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.217 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.217 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.217 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.476 03:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.407 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.665 03:31:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.665 03:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.960 00:21:40.960 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.960 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.960 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.218 { 00:21:41.218 "cntlid": 105, 00:21:41.218 "qid": 0, 00:21:41.218 "state": "enabled", 00:21:41.218 "listen_address": { 00:21:41.218 "trtype": "TCP", 00:21:41.218 "adrfam": "IPv4", 00:21:41.218 "traddr": "10.0.0.2", 00:21:41.218 "trsvcid": "4420" 00:21:41.218 }, 00:21:41.218 "peer_address": { 00:21:41.218 "trtype": "TCP", 00:21:41.218 "adrfam": "IPv4", 00:21:41.218 "traddr": "10.0.0.1", 00:21:41.218 "trsvcid": "32884" 00:21:41.218 }, 00:21:41.218 "auth": { 00:21:41.218 "state": "completed", 00:21:41.218 "digest": "sha512", 00:21:41.218 "dhgroup": "ffdhe2048" 00:21:41.218 } 00:21:41.218 } 00:21:41.218 ]' 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.218 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.783 03:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.718 03:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.976 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.234 00:21:43.234 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.234 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.234 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.492 { 00:21:43.492 "cntlid": 107, 00:21:43.492 "qid": 0, 00:21:43.492 "state": "enabled", 00:21:43.492 "listen_address": { 00:21:43.492 "trtype": "TCP", 00:21:43.492 "adrfam": "IPv4", 00:21:43.492 "traddr": "10.0.0.2", 00:21:43.492 "trsvcid": "4420" 00:21:43.492 }, 00:21:43.492 "peer_address": { 00:21:43.492 "trtype": "TCP", 00:21:43.492 "adrfam": "IPv4", 00:21:43.492 "traddr": "10.0.0.1", 00:21:43.492 "trsvcid": "34812" 00:21:43.492 }, 00:21:43.492 "auth": { 00:21:43.492 "state": "completed", 00:21:43.492 "digest": "sha512", 00:21:43.492 "dhgroup": "ffdhe2048" 00:21:43.492 } 00:21:43.492 } 00:21:43.492 ]' 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.492 03:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.749 03:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:21:45.118 03:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.118 03:31:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.118 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.374 00:21:45.374 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.374 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.375 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.631 { 00:21:45.631 "cntlid": 109, 00:21:45.631 "qid": 0, 00:21:45.631 "state": "enabled", 00:21:45.631 "listen_address": { 00:21:45.631 "trtype": "TCP", 00:21:45.631 "adrfam": "IPv4", 00:21:45.631 "traddr": "10.0.0.2", 00:21:45.631 "trsvcid": "4420" 00:21:45.631 }, 00:21:45.631 "peer_address": { 00:21:45.631 "trtype": "TCP", 00:21:45.631 
"adrfam": "IPv4", 00:21:45.631 "traddr": "10.0.0.1", 00:21:45.631 "trsvcid": "34834" 00:21:45.631 }, 00:21:45.631 "auth": { 00:21:45.631 "state": "completed", 00:21:45.631 "digest": "sha512", 00:21:45.631 "dhgroup": "ffdhe2048" 00:21:45.631 } 00:21:45.631 } 00:21:45.631 ]' 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.631 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.888 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.888 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.888 03:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.147 03:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.080 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.336 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.593 00:21:47.593 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.593 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.593 03:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.852 { 00:21:47.852 "cntlid": 111, 00:21:47.852 "qid": 0, 00:21:47.852 "state": "enabled", 00:21:47.852 "listen_address": { 00:21:47.852 "trtype": "TCP", 00:21:47.852 "adrfam": "IPv4", 00:21:47.852 "traddr": "10.0.0.2", 00:21:47.852 "trsvcid": "4420" 00:21:47.852 }, 00:21:47.852 "peer_address": { 00:21:47.852 "trtype": "TCP", 00:21:47.852 "adrfam": "IPv4", 00:21:47.852 "traddr": "10.0.0.1", 00:21:47.852 "trsvcid": "34864" 00:21:47.852 }, 00:21:47.852 "auth": { 00:21:47.852 "state": "completed", 00:21:47.852 "digest": "sha512", 00:21:47.852 "dhgroup": "ffdhe2048" 00:21:47.852 } 00:21:47.852 } 00:21:47.852 ]' 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.852 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.110 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.110 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.110 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.110 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.110 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.367 03:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.298 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.299 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.299 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.299 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.555 03:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
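After each attach, the test confirms that authentication actually completed: it reads the controller name back over the host socket, then inspects the negotiated auth parameters that the target reports per qpair. A sketch of that check as it runs next for the sha512/ffdhe3072 pass, assuming the same variables and sockets as the sketch above:

# Sketch of the post-attach verification, same assumptions as above.
name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# The target exposes the negotiated digest, dhgroup, and auth state
# on the qpair; "completed" means the handshake finished successfully.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach so the kernel-initiator half of the pass can repeat the handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0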
00:21:49.813 00:21:50.070 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.070 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.070 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.070 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.070 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.071 03:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.071 03:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.328 { 00:21:50.328 "cntlid": 113, 00:21:50.328 "qid": 0, 00:21:50.328 "state": "enabled", 00:21:50.328 "listen_address": { 00:21:50.328 "trtype": "TCP", 00:21:50.328 "adrfam": "IPv4", 00:21:50.328 "traddr": "10.0.0.2", 00:21:50.328 "trsvcid": "4420" 00:21:50.328 }, 00:21:50.328 "peer_address": { 00:21:50.328 "trtype": "TCP", 00:21:50.328 "adrfam": "IPv4", 00:21:50.328 "traddr": "10.0.0.1", 00:21:50.328 "trsvcid": "34886" 00:21:50.328 }, 00:21:50.328 "auth": { 00:21:50.328 "state": "completed", 00:21:50.328 "digest": "sha512", 00:21:50.328 "dhgroup": "ffdhe3072" 00:21:50.328 } 00:21:50.328 } 00:21:50.328 ]' 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.328 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.586 03:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
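The second half of each pass, as just traced, repeats the handshake from the kernel initiator: nvme-cli is handed the same secrets in DHHC-1 wire format on the command line, and the host is then deregistered so the next digest/dhgroup/key combination starts clean. A sketch, assuming an nvme-cli build with --dhchap-secret/--dhchap-ctrl-secret support and the same variables as the sketches above; the <...> placeholders stand in for the base64 DHHC-1 strings that appear verbatim in the log:

# Kernel-initiator sketch; <...> placeholders elide the DHHC-1 secrets
# printed in the log above.
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:00:<host key>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

nvme disconnect -n "$subnqn"

# Target side: drop the host entry before the next pass re-adds it
# with a different key.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"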
00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.519 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.777 03:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.035 00:21:52.035 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.035 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.035 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.292 { 00:21:52.292 
"cntlid": 115, 00:21:52.292 "qid": 0, 00:21:52.292 "state": "enabled", 00:21:52.292 "listen_address": { 00:21:52.292 "trtype": "TCP", 00:21:52.292 "adrfam": "IPv4", 00:21:52.292 "traddr": "10.0.0.2", 00:21:52.292 "trsvcid": "4420" 00:21:52.292 }, 00:21:52.292 "peer_address": { 00:21:52.292 "trtype": "TCP", 00:21:52.292 "adrfam": "IPv4", 00:21:52.292 "traddr": "10.0.0.1", 00:21:52.292 "trsvcid": "33220" 00:21:52.292 }, 00:21:52.292 "auth": { 00:21:52.292 "state": "completed", 00:21:52.292 "digest": "sha512", 00:21:52.292 "dhgroup": "ffdhe3072" 00:21:52.292 } 00:21:52.292 } 00:21:52.292 ]' 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.292 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.549 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.549 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.549 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.549 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.549 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.806 03:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:21:53.738 03:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.739 03:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.997 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.561 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.561 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.817 { 00:21:54.817 "cntlid": 117, 00:21:54.817 "qid": 0, 00:21:54.817 "state": "enabled", 00:21:54.817 "listen_address": { 00:21:54.817 "trtype": "TCP", 00:21:54.817 "adrfam": "IPv4", 00:21:54.817 "traddr": "10.0.0.2", 00:21:54.817 "trsvcid": "4420" 00:21:54.817 }, 00:21:54.817 "peer_address": { 00:21:54.817 "trtype": "TCP", 00:21:54.817 "adrfam": "IPv4", 00:21:54.817 "traddr": "10.0.0.1", 00:21:54.817 "trsvcid": "33244" 00:21:54.817 }, 00:21:54.817 "auth": { 00:21:54.817 "state": "completed", 00:21:54.817 "digest": "sha512", 00:21:54.817 "dhgroup": "ffdhe3072" 00:21:54.817 } 00:21:54.817 } 00:21:54.817 ]' 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.817 03:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:54.817 03:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.817 03:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.817 03:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.073 03:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.002 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.003 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 03:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.260 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.260 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.825 00:21:56.825 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.825 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.825 03:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.082 { 00:21:57.082 "cntlid": 119, 00:21:57.082 "qid": 0, 00:21:57.082 "state": "enabled", 00:21:57.082 "listen_address": { 00:21:57.082 "trtype": "TCP", 00:21:57.082 "adrfam": "IPv4", 00:21:57.082 "traddr": "10.0.0.2", 00:21:57.082 "trsvcid": "4420" 00:21:57.082 }, 00:21:57.082 "peer_address": { 00:21:57.082 "trtype": "TCP", 00:21:57.082 "adrfam": "IPv4", 00:21:57.082 "traddr": "10.0.0.1", 00:21:57.082 "trsvcid": "33282" 00:21:57.082 }, 00:21:57.082 "auth": { 00:21:57.082 "state": "completed", 00:21:57.082 "digest": "sha512", 00:21:57.082 "dhgroup": "ffdhe3072" 00:21:57.082 } 00:21:57.082 } 00:21:57.082 ]' 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.082 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.339 03:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:21:58.270 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.270 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.270 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.270 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.526 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.526 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.526 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.526 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.526 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.783 03:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.040 00:21:59.040 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.040 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.040 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.330 03:31:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.330 { 00:21:59.330 "cntlid": 121, 00:21:59.330 "qid": 0, 00:21:59.330 "state": "enabled", 00:21:59.330 "listen_address": { 00:21:59.330 "trtype": "TCP", 00:21:59.330 "adrfam": "IPv4", 00:21:59.330 "traddr": "10.0.0.2", 00:21:59.330 "trsvcid": "4420" 00:21:59.330 }, 00:21:59.330 "peer_address": { 00:21:59.330 "trtype": "TCP", 00:21:59.330 "adrfam": "IPv4", 00:21:59.330 "traddr": "10.0.0.1", 00:21:59.330 "trsvcid": "33296" 00:21:59.330 }, 00:21:59.330 "auth": { 00:21:59.330 "state": "completed", 00:21:59.330 "digest": "sha512", 00:21:59.330 "dhgroup": "ffdhe4096" 00:21:59.330 } 00:21:59.330 } 00:21:59.330 ]' 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.330 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.595 03:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.524 03:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.832 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:22:00.832 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.833 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.396 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.396 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.653 { 00:22:01.653 "cntlid": 123, 00:22:01.653 "qid": 0, 00:22:01.653 "state": "enabled", 00:22:01.653 "listen_address": { 00:22:01.653 "trtype": "TCP", 00:22:01.653 "adrfam": "IPv4", 00:22:01.653 "traddr": "10.0.0.2", 00:22:01.653 "trsvcid": "4420" 00:22:01.653 }, 00:22:01.653 "peer_address": { 00:22:01.653 "trtype": "TCP", 00:22:01.653 "adrfam": "IPv4", 00:22:01.653 "traddr": "10.0.0.1", 00:22:01.653 "trsvcid": "33332" 00:22:01.653 }, 00:22:01.653 "auth": { 00:22:01.653 "state": "completed", 00:22:01.653 "digest": "sha512", 00:22:01.653 "dhgroup": "ffdhe4096" 00:22:01.653 } 00:22:01.653 } 00:22:01.653 ]' 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.653 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.654 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.654 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.654 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.654 03:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.911 03:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.843 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.101 
03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.101 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.666 00:22:03.666 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.666 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.666 03:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.923 { 00:22:03.923 "cntlid": 125, 00:22:03.923 "qid": 0, 00:22:03.923 "state": "enabled", 00:22:03.923 "listen_address": { 00:22:03.923 "trtype": "TCP", 00:22:03.923 "adrfam": "IPv4", 00:22:03.923 "traddr": "10.0.0.2", 00:22:03.923 "trsvcid": "4420" 00:22:03.923 }, 00:22:03.923 "peer_address": { 00:22:03.923 "trtype": "TCP", 00:22:03.923 "adrfam": "IPv4", 00:22:03.923 "traddr": "10.0.0.1", 00:22:03.923 "trsvcid": "54820" 00:22:03.923 }, 00:22:03.923 "auth": { 00:22:03.923 "state": "completed", 00:22:03.923 "digest": "sha512", 00:22:03.923 "dhgroup": "ffdhe4096" 00:22:03.923 } 00:22:03.923 } 00:22:03.923 ]' 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.923 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.180 03:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:22:05.113 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.114 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.371 03:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.935 00:22:05.935 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.935 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.935 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.193 { 00:22:06.193 "cntlid": 127, 00:22:06.193 "qid": 0, 00:22:06.193 "state": "enabled", 00:22:06.193 "listen_address": { 00:22:06.193 "trtype": "TCP", 00:22:06.193 "adrfam": "IPv4", 00:22:06.193 "traddr": "10.0.0.2", 00:22:06.193 "trsvcid": "4420" 00:22:06.193 }, 00:22:06.193 "peer_address": { 00:22:06.193 "trtype": "TCP", 00:22:06.193 "adrfam": "IPv4", 00:22:06.193 "traddr": "10.0.0.1", 00:22:06.193 "trsvcid": "54840" 00:22:06.193 }, 00:22:06.193 "auth": { 00:22:06.193 "state": "completed", 00:22:06.193 "digest": "sha512", 00:22:06.193 "dhgroup": "ffdhe4096" 00:22:06.193 } 00:22:06.193 } 00:22:06.193 ]' 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.193 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.451 03:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
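Here the trace moves on to the ffdhe6144 group; the same connect/verify/teardown loop then repeats for keys 0 through 3. Condensed into plain shell under the same assumptions as the sketch above (NQNs and host UUID taken from the log, secret values elided):

RPC="$SPDK/scripts/rpc.py"
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
# Host-side bdev_nvme options gate which digest/dhgroup the initiator may negotiate.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Target authorizes the host with a key pair; the ctrlr key is optional (the key3 passes in this trace omit it).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ...assert the qpair's auth block as shown earlier, then tear down before the next keyid:
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"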
00:22:07.387 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.644 03:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.207 00:22:08.207 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.207 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.207 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.464 { 00:22:08.464 "cntlid": 129, 00:22:08.464 "qid": 0, 00:22:08.464 "state": "enabled", 00:22:08.464 "listen_address": { 00:22:08.464 "trtype": "TCP", 00:22:08.464 "adrfam": "IPv4", 00:22:08.464 "traddr": "10.0.0.2", 00:22:08.464 "trsvcid": "4420" 00:22:08.464 }, 00:22:08.464 "peer_address": { 00:22:08.464 "trtype": "TCP", 00:22:08.464 "adrfam": "IPv4", 00:22:08.464 "traddr": "10.0.0.1", 00:22:08.464 "trsvcid": "54864" 00:22:08.464 }, 00:22:08.464 "auth": { 
00:22:08.464 "state": "completed", 00:22:08.464 "digest": "sha512", 00:22:08.464 "dhgroup": "ffdhe6144" 00:22:08.464 } 00:22:08.464 } 00:22:08.464 ]' 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.464 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.720 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.720 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.720 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.720 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.720 03:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.977 03:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:22:09.906 03:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.906 03:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.906 03:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.906 03:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.906 03:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.906 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.906 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.906 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.163 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.726 00:22:10.726 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.726 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.726 03:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.984 { 00:22:10.984 "cntlid": 131, 00:22:10.984 "qid": 0, 00:22:10.984 "state": "enabled", 00:22:10.984 "listen_address": { 00:22:10.984 "trtype": "TCP", 00:22:10.984 "adrfam": "IPv4", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "trsvcid": "4420" 00:22:10.984 }, 00:22:10.984 "peer_address": { 00:22:10.984 "trtype": "TCP", 00:22:10.984 "adrfam": "IPv4", 00:22:10.984 "traddr": "10.0.0.1", 00:22:10.984 "trsvcid": "54892" 00:22:10.984 }, 00:22:10.984 "auth": { 00:22:10.984 "state": "completed", 00:22:10.984 "digest": "sha512", 00:22:10.984 "dhgroup": "ffdhe6144" 00:22:10.984 } 00:22:10.984 } 00:22:10.984 ]' 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.984 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.242 03:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.174 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.432 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.690 03:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.690 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.690 03:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:13.255 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.255 { 00:22:13.255 "cntlid": 133, 00:22:13.255 "qid": 0, 00:22:13.255 "state": "enabled", 00:22:13.255 "listen_address": { 00:22:13.255 "trtype": "TCP", 00:22:13.255 "adrfam": "IPv4", 00:22:13.255 "traddr": "10.0.0.2", 00:22:13.255 "trsvcid": "4420" 00:22:13.255 }, 00:22:13.255 "peer_address": { 00:22:13.255 "trtype": "TCP", 00:22:13.255 "adrfam": "IPv4", 00:22:13.255 "traddr": "10.0.0.1", 00:22:13.255 "trsvcid": "55330" 00:22:13.255 }, 00:22:13.255 "auth": { 00:22:13.255 "state": "completed", 00:22:13.255 "digest": "sha512", 00:22:13.255 "dhgroup": "ffdhe6144" 00:22:13.255 } 00:22:13.255 } 00:22:13.255 ]' 00:22:13.255 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.513 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.771 03:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.705 03:31:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.705 03:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.963 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.528 00:22:15.528 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.528 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.528 03:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.786 { 00:22:15.786 "cntlid": 135, 00:22:15.786 "qid": 0, 00:22:15.786 "state": "enabled", 00:22:15.786 "listen_address": { 
00:22:15.786 "trtype": "TCP", 00:22:15.786 "adrfam": "IPv4", 00:22:15.786 "traddr": "10.0.0.2", 00:22:15.786 "trsvcid": "4420" 00:22:15.786 }, 00:22:15.786 "peer_address": { 00:22:15.786 "trtype": "TCP", 00:22:15.786 "adrfam": "IPv4", 00:22:15.786 "traddr": "10.0.0.1", 00:22:15.786 "trsvcid": "55364" 00:22:15.786 }, 00:22:15.786 "auth": { 00:22:15.786 "state": "completed", 00:22:15.786 "digest": "sha512", 00:22:15.786 "dhgroup": "ffdhe6144" 00:22:15.786 } 00:22:15.786 } 00:22:15.786 ]' 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.786 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.043 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.043 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.043 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.043 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.043 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.301 03:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.254 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.511 03:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.441 00:22:18.441 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.441 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.441 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.698 { 00:22:18.698 "cntlid": 137, 00:22:18.698 "qid": 0, 00:22:18.698 "state": "enabled", 00:22:18.698 "listen_address": { 00:22:18.698 "trtype": "TCP", 00:22:18.698 "adrfam": "IPv4", 00:22:18.698 "traddr": "10.0.0.2", 00:22:18.698 "trsvcid": "4420" 00:22:18.698 }, 00:22:18.698 "peer_address": { 00:22:18.698 "trtype": "TCP", 00:22:18.698 "adrfam": "IPv4", 00:22:18.698 "traddr": "10.0.0.1", 00:22:18.698 "trsvcid": "55380" 00:22:18.698 }, 00:22:18.698 "auth": { 00:22:18.698 "state": "completed", 00:22:18.698 "digest": "sha512", 00:22:18.698 "dhgroup": "ffdhe8192" 00:22:18.698 } 00:22:18.698 } 00:22:18.698 ]' 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.698 03:32:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.698 03:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.954 03:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.885 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.143 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.400 03:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.400 03:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.401 03:32:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.333 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.333 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.333 { 00:22:21.333 "cntlid": 139, 00:22:21.333 "qid": 0, 00:22:21.333 "state": "enabled", 00:22:21.333 "listen_address": { 00:22:21.333 "trtype": "TCP", 00:22:21.333 "adrfam": "IPv4", 00:22:21.333 "traddr": "10.0.0.2", 00:22:21.333 "trsvcid": "4420" 00:22:21.333 }, 00:22:21.333 "peer_address": { 00:22:21.333 "trtype": "TCP", 00:22:21.333 "adrfam": "IPv4", 00:22:21.333 "traddr": "10.0.0.1", 00:22:21.333 "trsvcid": "55408" 00:22:21.333 }, 00:22:21.333 "auth": { 00:22:21.333 "state": "completed", 00:22:21.333 "digest": "sha512", 00:22:21.333 "dhgroup": "ffdhe8192" 00:22:21.333 } 00:22:21.333 } 00:22:21.333 ]' 00:22:21.334 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.592 03:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.851 03:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NGRkNzA0MzQxZjdmN2NkN2Y0NDdlYTI1MzQwYTRlZTDXYZOU: --dhchap-ctrl-secret DHHC-1:02:YTU0YTdiMmI4YjFmMmJmNWYyYjg5MDI5ZmRmMTA0MDM4ZDU0YTUxOTI3ZWUzZGFjPc0M1Q==: 00:22:22.785 03:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:22.785 03:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.785 03:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.785 03:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.785 03:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.785 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.785 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.785 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.043 03:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.975 00:22:23.975 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.975 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.975 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.233 { 00:22:24.233 "cntlid": 141, 00:22:24.233 "qid": 0, 00:22:24.233 "state": "enabled", 00:22:24.233 "listen_address": { 00:22:24.233 "trtype": "TCP", 00:22:24.233 "adrfam": "IPv4", 00:22:24.233 "traddr": "10.0.0.2", 00:22:24.233 "trsvcid": "4420" 00:22:24.233 }, 00:22:24.233 "peer_address": { 00:22:24.233 "trtype": "TCP", 00:22:24.233 "adrfam": "IPv4", 00:22:24.233 "traddr": "10.0.0.1", 00:22:24.233 "trsvcid": "32942" 00:22:24.233 }, 00:22:24.233 "auth": { 00:22:24.233 "state": "completed", 00:22:24.233 "digest": "sha512", 00:22:24.233 "dhgroup": "ffdhe8192" 00:22:24.233 } 00:22:24.233 } 00:22:24.233 ]' 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.233 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.490 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.490 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.490 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.748 03:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGRiZDgxMzAzNWRlYWRiZTEwMDhiMDEwZmZhOTJlMTIwMGNlYTUyMTUwMmIyODBm/CtKoQ==: --dhchap-ctrl-secret DHHC-1:01:Nzg0N2MwMWE4OWYzNDJmZmFlNWI0NmRjZDI0NTBlYzEb/Pto: 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.680 03:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.938 03:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.869 00:22:26.869 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.869 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.869 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.126 { 00:22:27.126 "cntlid": 143, 00:22:27.126 "qid": 0, 00:22:27.126 "state": "enabled", 00:22:27.126 "listen_address": { 00:22:27.126 "trtype": "TCP", 00:22:27.126 "adrfam": "IPv4", 00:22:27.126 "traddr": "10.0.0.2", 00:22:27.126 "trsvcid": "4420" 00:22:27.126 }, 00:22:27.126 "peer_address": { 00:22:27.126 "trtype": "TCP", 00:22:27.126 "adrfam": "IPv4", 00:22:27.126 "traddr": "10.0.0.1", 00:22:27.126 "trsvcid": "32970" 00:22:27.126 }, 00:22:27.126 "auth": { 00:22:27.126 "state": "completed", 00:22:27.126 "digest": "sha512", 00:22:27.126 "dhgroup": "ffdhe8192" 00:22:27.126 } 00:22:27.126 } 00:22:27.126 ]' 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.126 03:32:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.126 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.384 03:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.314 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.571 03:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.501 00:22:29.501 03:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.501 03:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.501 03:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.758 { 00:22:29.758 "cntlid": 145, 00:22:29.758 "qid": 0, 00:22:29.758 "state": "enabled", 00:22:29.758 "listen_address": { 00:22:29.758 "trtype": "TCP", 00:22:29.758 "adrfam": "IPv4", 00:22:29.758 "traddr": "10.0.0.2", 00:22:29.758 "trsvcid": "4420" 00:22:29.758 }, 00:22:29.758 "peer_address": { 00:22:29.758 "trtype": "TCP", 00:22:29.758 "adrfam": "IPv4", 00:22:29.758 "traddr": "10.0.0.1", 00:22:29.758 "trsvcid": "32990" 00:22:29.758 }, 00:22:29.758 "auth": { 00:22:29.758 "state": "completed", 00:22:29.758 "digest": "sha512", 00:22:29.758 "dhgroup": "ffdhe8192" 00:22:29.758 } 00:22:29.758 } 00:22:29.758 ]' 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.758 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.015 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.015 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.015 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.015 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.015 03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.272 
03:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZWY5ZTUwMmNiY2M0ZTBiZWU5MDcxOGYzMmJmNDg5NTk0MzhmY2QwNGQ1OWRhZmM0bAyhVA==: --dhchap-ctrl-secret DHHC-1:03:NmQ0NTkxMzY0N2Q1MDBlYTgzNzA5NWNkMjY0NDhjOTg1YTA5NDg3NTk2Y2FiNGFjYWNiZGM5MGRkZWQ1NTNiNnbWCLs=: 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.202 03:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:32.133 request: 00:22:32.133 { 00:22:32.133 "name": "nvme0", 00:22:32.133 "trtype": "tcp", 00:22:32.133 "traddr": 
"10.0.0.2", 00:22:32.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.133 "adrfam": "ipv4", 00:22:32.133 "trsvcid": "4420", 00:22:32.133 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.133 "dhchap_key": "key2", 00:22:32.133 "method": "bdev_nvme_attach_controller", 00:22:32.133 "req_id": 1 00:22:32.133 } 00:22:32.133 Got JSON-RPC error response 00:22:32.133 response: 00:22:32.133 { 00:22:32.133 "code": -5, 00:22:32.133 "message": "Input/output error" 00:22:32.133 } 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.133 03:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.134 03:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.063 request: 00:22:33.063 { 00:22:33.063 "name": "nvme0", 00:22:33.063 "trtype": "tcp", 00:22:33.063 "traddr": "10.0.0.2", 00:22:33.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.063 "adrfam": "ipv4", 00:22:33.063 "trsvcid": "4420", 00:22:33.063 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.063 "dhchap_key": "key1", 00:22:33.063 "dhchap_ctrlr_key": "ckey2", 00:22:33.063 "method": "bdev_nvme_attach_controller", 00:22:33.063 "req_id": 1 00:22:33.063 } 00:22:33.063 Got JSON-RPC error response 00:22:33.063 response: 00:22:33.063 { 00:22:33.063 "code": -5, 00:22:33.063 "message": "Input/output error" 00:22:33.063 } 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.063 03:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.992 request: 00:22:33.992 { 00:22:33.992 "name": "nvme0", 00:22:33.992 "trtype": "tcp", 00:22:33.992 "traddr": "10.0.0.2", 00:22:33.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.992 "adrfam": "ipv4", 00:22:33.992 "trsvcid": "4420", 00:22:33.992 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.992 "dhchap_key": "key1", 00:22:33.992 "dhchap_ctrlr_key": "ckey1", 00:22:33.992 "method": "bdev_nvme_attach_controller", 00:22:33.992 "req_id": 1 00:22:33.992 } 00:22:33.992 Got JSON-RPC error response 00:22:33.992 response: 00:22:33.992 { 00:22:33.992 "code": -5, 00:22:33.992 "message": "Input/output error" 00:22:33.992 } 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2419283 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2419283 ']' 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2419283 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2419283 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2419283' 00:22:33.992 killing process with pid 2419283 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2419283 00:22:33.992 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2419283 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:34.249 03:32:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2441674 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2441674 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2441674 ']' 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.249 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2441674 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2441674 ']' 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.506 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.762 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.762 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:34.762 03:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:34.762 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.762 03:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.762 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.763 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.712 00:22:35.712 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.712 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.712 03:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.970 { 00:22:35.970 
"cntlid": 1, 00:22:35.970 "qid": 0, 00:22:35.970 "state": "enabled", 00:22:35.970 "listen_address": { 00:22:35.970 "trtype": "TCP", 00:22:35.970 "adrfam": "IPv4", 00:22:35.970 "traddr": "10.0.0.2", 00:22:35.970 "trsvcid": "4420" 00:22:35.970 }, 00:22:35.970 "peer_address": { 00:22:35.970 "trtype": "TCP", 00:22:35.970 "adrfam": "IPv4", 00:22:35.970 "traddr": "10.0.0.1", 00:22:35.970 "trsvcid": "48112" 00:22:35.970 }, 00:22:35.970 "auth": { 00:22:35.970 "state": "completed", 00:22:35.970 "digest": "sha512", 00:22:35.970 "dhgroup": "ffdhe8192" 00:22:35.970 } 00:22:35.970 } 00:22:35.970 ]' 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.970 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.228 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.228 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.228 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.228 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.228 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.486 03:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTRlZDY1MGEyZTIwOTFhNTU5MDAxMzM4ZTc2ZDUwOGI3NDYxNjRjMWYxNDRmNDIyMzQwNzQwMGQwZjQyMzczMrqoUKk=: 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:37.418 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.675 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.676 03:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.676 03:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.932 request: 00:22:37.932 { 00:22:37.932 "name": "nvme0", 00:22:37.932 "trtype": "tcp", 00:22:37.932 "traddr": "10.0.0.2", 00:22:37.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.932 "adrfam": "ipv4", 00:22:37.932 "trsvcid": "4420", 00:22:37.932 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.932 "dhchap_key": "key3", 00:22:37.932 "method": "bdev_nvme_attach_controller", 00:22:37.932 "req_id": 1 00:22:37.932 } 00:22:37.932 Got JSON-RPC error response 00:22:37.932 response: 00:22:37.932 { 00:22:37.932 "code": -5, 00:22:37.932 "message": "Input/output error" 00:22:37.932 } 00:22:37.932 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:37.933 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.189 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.446 request: 00:22:38.446 { 00:22:38.446 "name": "nvme0", 00:22:38.446 "trtype": "tcp", 00:22:38.446 "traddr": "10.0.0.2", 00:22:38.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.446 "adrfam": "ipv4", 00:22:38.446 "trsvcid": "4420", 00:22:38.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.446 "dhchap_key": "key3", 00:22:38.446 "method": "bdev_nvme_attach_controller", 00:22:38.446 "req_id": 1 00:22:38.446 } 00:22:38.446 Got JSON-RPC error response 00:22:38.446 response: 00:22:38.446 { 00:22:38.446 "code": -5, 00:22:38.446 "message": "Input/output error" 00:22:38.446 } 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.446 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.703 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.704 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.704 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.704 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.704 03:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.704 03:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.960 request: 00:22:38.960 { 00:22:38.960 "name": "nvme0", 00:22:38.960 "trtype": "tcp", 00:22:38.960 "traddr": "10.0.0.2", 00:22:38.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.960 "adrfam": "ipv4", 00:22:38.960 "trsvcid": "4420", 00:22:38.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.960 "dhchap_key": "key0", 00:22:38.960 "dhchap_ctrlr_key": "key1", 00:22:38.960 "method": "bdev_nvme_attach_controller", 00:22:38.960 "req_id": 1 00:22:38.960 } 00:22:38.960 Got JSON-RPC error response 00:22:38.960 response: 00:22:38.960 { 00:22:38.960 "code": -5, 00:22:38.960 "message": "Input/output error" 00:22:38.960 } 00:22:38.960 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.960 03:32:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.960 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.960 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.960 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:38.960 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.218 00:22:39.218 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:39.218 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:39.218 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.475 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.475 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.475 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2419425 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2419425 ']' 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2419425 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2419425 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2419425' 00:22:39.733 killing process with pid 2419425 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2419425 00:22:39.733 03:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2419425 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
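[Editor's sketch] Before the module unload below, the host-side DH-CHAP sequence that finally succeeded is worth condensing. It reduces to two RPCs against the host app's socket, using exactly the values from this log; here rpc.py abbreviates the full scripts/rpc.py path shown above, and the key names (key0..key3) refer to keys the test registered earlier:

    # allow every digest/dhgroup combination on the host side
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    # attach with a single host key; a mismatched key combination fails the
    # CONNECT and surfaces as JSON-RPC error -5 (Input/output error), exactly
    # as in the NOT cases traced above
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0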
00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.298 rmmod nvme_tcp 00:22:40.298 rmmod nvme_fabrics 00:22:40.298 rmmod nvme_keyring 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2441674 ']' 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2441674 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2441674 ']' 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2441674 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2441674 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2441674' 00:22:40.298 killing process with pid 2441674 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2441674 00:22:40.298 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2441674 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.555 03:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.448 03:32:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.448 03:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8bL /tmp/spdk.key-sha256.h1E /tmp/spdk.key-sha384.iWo /tmp/spdk.key-sha512.pvF /tmp/spdk.key-sha512.eYQ /tmp/spdk.key-sha384.uJe /tmp/spdk.key-sha256.MPL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:42.448 00:22:42.448 real 3m8.264s 00:22:42.448 user 7m17.647s 00:22:42.448 sys 0m24.603s 00:22:42.448 03:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:42.448 03:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.448 ************************************ 00:22:42.448 END TEST 
nvmf_auth_target 00:22:42.448 ************************************ 00:22:42.448 03:32:27 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:42.448 03:32:27 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.448 03:32:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:42.448 03:32:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:42.448 03:32:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.448 ************************************ 00:22:42.448 START TEST nvmf_bdevio_no_huge 00:22:42.448 ************************************ 00:22:42.448 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.705 * Looking for test storage... 00:22:42.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
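[Editor's sketch] The bdevio run starting here repeats the nvmf target flow with hugepages disabled. The only material difference from the preceding tests is the launch line, which appears further down in this log; shortened to its essentials (the nvmf_tgt path is abbreviated):

    # --no-huge with -s 1024 backs DPDK with 1024 MB of ordinary,
    # non-hugepage memory inside the target's network namespace
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78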
00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.705 03:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.602 03:32:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.602 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.603 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.603 
03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:22:44.861 00:22:44.861 --- 10.0.0.2 ping statistics --- 00:22:44.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.861 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:44.861 00:22:44.861 --- 10.0.0.1 ping statistics --- 00:22:44.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.861 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:44.861 03:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2444427 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2444427 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 2444427 ']' 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.861 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.861 [2024-07-21 03:32:30.051146] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:44.861 [2024-07-21 03:32:30.051261] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:44.861 [2024-07-21 03:32:30.125882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.120 [2024-07-21 03:32:30.216781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.120 [2024-07-21 03:32:30.216841] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.120 [2024-07-21 03:32:30.216871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.120 [2024-07-21 03:32:30.216886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.120 [2024-07-21 03:32:30.216898] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.120 [2024-07-21 03:32:30.216983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.120 [2024-07-21 03:32:30.217042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:45.120 [2024-07-21 03:32:30.217096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:45.120 [2024-07-21 03:32:30.217099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 [2024-07-21 03:32:30.329017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
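[Editor's sketch] Condensed, the target-side provisioning that the surrounding rpc_cmd traces perform is five RPCs, with the same values as logged (rpc.py abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u 8192 sets the I/O unit size
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420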
00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 Malloc0 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 [2024-07-21 03:32:30.366739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.120 { 00:22:45.120 "params": { 00:22:45.120 "name": "Nvme$subsystem", 00:22:45.120 "trtype": "$TEST_TRANSPORT", 00:22:45.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.120 "adrfam": "ipv4", 00:22:45.120 "trsvcid": "$NVMF_PORT", 00:22:45.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.120 "hdgst": ${hdgst:-false}, 00:22:45.120 "ddgst": ${ddgst:-false} 00:22:45.120 }, 00:22:45.120 "method": "bdev_nvme_attach_controller" 00:22:45.120 } 00:22:45.120 EOF 00:22:45.120 )") 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
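[Editor's sketch] The /dev/fd/62 handed to bdevio above is bash process substitution: gen_nvmf_target_json emits the JSON config on a pipe, so no temporary file is needed. An equivalent invocation, assuming the test's working directory:

    ./bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The generated config, printed next in the log, boils down to a single bdev_nvme_attach_controller entry pointing at the listener created a moment ago.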
00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:45.120 03:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.120 "params": { 00:22:45.120 "name": "Nvme1", 00:22:45.120 "trtype": "tcp", 00:22:45.120 "traddr": "10.0.0.2", 00:22:45.120 "adrfam": "ipv4", 00:22:45.120 "trsvcid": "4420", 00:22:45.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.120 "hdgst": false, 00:22:45.120 "ddgst": false 00:22:45.120 }, 00:22:45.120 "method": "bdev_nvme_attach_controller" 00:22:45.120 }' 00:22:45.120 [2024-07-21 03:32:30.409781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:45.120 [2024-07-21 03:32:30.409862] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2444459 ] 00:22:45.378 [2024-07-21 03:32:30.471167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.378 [2024-07-21 03:32:30.553440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.378 [2024-07-21 03:32:30.553491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.378 [2024-07-21 03:32:30.553494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.636 I/O targets: 00:22:45.636 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:45.636 00:22:45.636 00:22:45.636 CUnit - A unit testing framework for C - Version 2.1-3 00:22:45.636 http://cunit.sourceforge.net/ 00:22:45.636 00:22:45.636 00:22:45.636 Suite: bdevio tests on: Nvme1n1 00:22:45.636 Test: blockdev write read block ...passed 00:22:45.893 Test: blockdev write zeroes read block ...passed 00:22:45.893 Test: blockdev write zeroes read no split ...passed 00:22:45.893 Test: blockdev write zeroes read split ...passed 00:22:45.893 Test: blockdev write zeroes read split partial ...passed 00:22:45.893 Test: blockdev reset ...[2024-07-21 03:32:31.064031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.893 [2024-07-21 03:32:31.064145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82ca00 (9): Bad file descriptor 00:22:45.893 [2024-07-21 03:32:31.081626] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:45.893 passed 00:22:45.893 Test: blockdev write read 8 blocks ...passed 00:22:45.893 Test: blockdev write read size > 128k ...passed 00:22:45.893 Test: blockdev write read invalid size ...passed 00:22:45.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:45.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:45.893 Test: blockdev write read max offset ...passed 00:22:46.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:46.167 Test: blockdev writev readv 8 blocks ...passed 00:22:46.167 Test: blockdev writev readv 30 x 1block ...passed 00:22:46.167 Test: blockdev writev readv block ...passed 00:22:46.167 Test: blockdev writev readv size > 128k ...passed 00:22:46.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:46.167 Test: blockdev comparev and writev ...[2024-07-21 03:32:31.254106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.254168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.254507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.254555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.254905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.254953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.254970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.255294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.255318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.255339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.167 [2024-07-21 03:32:31.255356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.167 passed 00:22:46.167 Test: blockdev nvme passthru rw ...passed 00:22:46.167 Test: blockdev nvme passthru vendor specific ...[2024-07-21 03:32:31.337875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.167 [2024-07-21 03:32:31.337903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.338061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.167 [2024-07-21 03:32:31.338085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.338239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.167 [2024-07-21 03:32:31.338262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.167 [2024-07-21 03:32:31.338416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.167 [2024-07-21 03:32:31.338439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.167 passed 00:22:46.167 Test: blockdev nvme admin passthru ...passed 00:22:46.167 Test: blockdev copy ...passed 00:22:46.167 00:22:46.167 Run Summary: Type Total Ran Passed Failed Inactive 00:22:46.167 suites 1 1 n/a 0 0 00:22:46.167 tests 23 23 23 0 0 00:22:46.167 asserts 152 152 152 0 n/a 00:22:46.167 00:22:46.167 Elapsed time = 1.066 seconds 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.425 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.682 rmmod nvme_tcp 00:22:46.682 rmmod nvme_fabrics 00:22:46.682 rmmod nvme_keyring 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2444427 ']' 00:22:46.682 03:32:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2444427 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 2444427 ']' 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 2444427 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2444427 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2444427' 00:22:46.682 killing process with pid 2444427 00:22:46.682 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 2444427 00:22:46.683 03:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 2444427 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.941 03:32:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.465 03:32:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.465 00:22:49.465 real 0m6.481s 00:22:49.465 user 0m10.631s 00:22:49.465 sys 0m2.516s 00:22:49.465 03:32:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:49.465 03:32:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.465 ************************************ 00:22:49.465 END TEST nvmf_bdevio_no_huge 00:22:49.465 ************************************ 00:22:49.465 03:32:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.465 03:32:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:49.465 03:32:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:49.465 03:32:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.465 ************************************ 00:22:49.465 START TEST nvmf_tls 00:22:49.465 ************************************ 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.465 * Looking for test storage... 
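[Editor's sketch] nvmf_tls begins by repeating the same nvmftestinit bring-up used for the previous tests. Condensed from the commands logged in the bdevio section above, the two-namespace topology is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # cross-namespace sanity check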
00:22:49.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.465 03:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.377 
03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.377 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
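The discovery pass above classifies NICs purely by PCI vendor/device ID (0x8086:0x1592 and 0x8086:0x159b are Intel E810/ice, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox parts) and then resolves each match to its kernel net devices, here cvl_0_0 and cvl_0_1. A standalone sysfs sketch of the same idea, trimmed to the two E810 IDs this host actually matched:

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [ "$vendor" = "$intel" ] || continue
      case $device in
      0x1592|0x159b)
          for net in "$pci"/net/*; do            # net/ appears once a driver has bound the port
              [ -e "$net" ] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
          done
          ;;
      esac
  done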
-- # (( 2 > 1 )) 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:22:51.378 00:22:51.378 --- 10.0.0.2 ping statistics --- 00:22:51.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.378 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
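nvmf_tcp_init, traced above, builds the suite's two-port loopback topology: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace to play the target, its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420, and a ping in each direction proves the path. Condensed from the trace, with the interface names of this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator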
00:22:51.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:22:51.378 00:22:51.378 --- 10.0.0.1 ping statistics --- 00:22:51.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.378 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2446590 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2446590 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2446590 ']' 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.378 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.378 [2024-07-21 03:32:36.527919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:51.378 [2024-07-21 03:32:36.528027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.378 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.378 [2024-07-21 03:32:36.598013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.378 [2024-07-21 03:32:36.686822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.378 [2024-07-21 03:32:36.686883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
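nvmfappstart, in progress above, launches the target inside the namespace with --wait-for-rpc, so the reactor comes up but the framework stays paused until the ssl socket options below have been applied over the RPC socket. The launch-and-wait shape, condensed; the rpc_get_methods poll here stands in for the suite's own waitforlisten helper and is illustrative:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # poll until the app answers on /var/tmp/spdk.sock before sending configuration
  ./scripts/rpc.py -t 30 rpc_get_methods >/dev/null
  trap 'killprocess $nvmfpid' SIGINT SIGTERM EXIT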
00:22:51.378 [2024-07-21 03:32:36.686910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.378 [2024-07-21 03:32:36.686924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.378 [2024-07-21 03:32:36.686950] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.378 [2024-07-21 03:32:36.686981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:51.633 03:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:51.890 true 00:22:51.890 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.890 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:52.147 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:52.147 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:52.147 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.404 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.404 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:52.662 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:52.662 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:52.662 03:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:52.918 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.918 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:53.175 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:53.175 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:53.175 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.175 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:53.433 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:53.433 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:53.433 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:53.690 03:32:38 nvmf_tcp.nvmf_tls -- 
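The stretch above and immediately below is plain RPC plumbing verification for the ssl socket implementation: make ssl the default impl, set an option, read it back through sock_impl_get_options piped to jq, and fail on any mismatch; the tls-version values 13 and 7 and the enable/disable-ktls toggles all take the same round trip. As one self-checking script (rpc path shortened):

  rpc=./scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  [ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" = 13 ] || echo "FAIL: tls_version"
  $rpc sock_impl_set_options -i ssl --enable-ktls
  [ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" = true ] || echo "FAIL: enable_ktls"
  $rpc sock_impl_set_options -i ssl --disable-ktls
  [ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" = false ] || echo "FAIL: disable_ktls"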
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.690 03:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:53.947 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:53.947 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:53.947 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:54.242 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.242 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qLzm6C3aOa 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.A9FJNnUWWp 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qLzm6C3aOa 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.A9FJNnUWWp 00:22:54.513 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
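format_interchange_psk, traced above, wraps the configured key in the NVMe TLS PSK interchange format: the printable string is NVMeTLSkey-1:01:<base64 of the key bytes with a CRC32 appended>:, where 01 is the SHA-256 hash tag selected by the digest argument 1; each result is written to a mktemp file and chmod 0600 so it can be handed around by path. A rough equivalent of the helper's embedded python step, shown as a sketch of the format rather than the suite's exact code (CRC byte order per the interchange convention is an assumption here):

  python3 - <<'EOF'
  import base64, struct, zlib
  key = b"00112233445566778899aabbccddeeff"   # the key exactly as configured above
  crc = struct.pack("<I", zlib.crc32(key))    # 4-byte CRC32, little-endian
  print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
  EOF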
sock_impl_set_options -i ssl --tls-version 13 00:22:54.771 03:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:55.336 03:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qLzm6C3aOa 00:22:55.336 03:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qLzm6C3aOa 00:22:55.336 03:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.336 [2024-07-21 03:32:40.591502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.336 03:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.593 03:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.851 [2024-07-21 03:32:41.124967] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.851 [2024-07-21 03:32:41.125220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.851 03:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.108 malloc0 00:22:56.108 03:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.366 03:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa 00:22:56.624 [2024-07-21 03:32:41.846013] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:56.624 03:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qLzm6C3aOa 00:22:56.624 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.811 Initializing NVMe Controllers 00:23:08.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.811 Initialization complete. Launching workers. 
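setup_nvmf_tgt, just traced, is the full target-side TLS provisioning path: create the TCP transport with the suite's NVMF_TRANSPORT_OPTS, create a subsystem, publish a listener on 10.0.0.2:4420 with -k so TLS is required, back it with a 32 MB malloc bdev as namespace 1, and register host1 together with its PSK file; the perf initiator then dials in with -S ssl and the matching --psk-path. Collected in one place from the trace, with repo-relative paths:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa
  # the initiator runs in the same namespace and connects over TLS:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.qLzm6C3aOa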
00:23:08.811 ======================================================== 00:23:08.811 Latency(us) 00:23:08.811 Device Information : IOPS MiB/s Average min max 00:23:08.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7468.76 29.17 8571.76 1369.29 9417.22 00:23:08.811 ======================================================== 00:23:08.811 Total : 7468.76 29.17 8571.76 1369.29 9417.22 00:23:08.811 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLzm6C3aOa 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qLzm6C3aOa' 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2448415 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2448415 /var/tmp/bdevperf.sock 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2448415 ']' 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:08.811 03:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.811 [2024-07-21 03:32:52.013701] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
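run_bdevperf, starting above, re-drives the same listener through the bdevperf app instead of spdk_nvme_perf: bdevperf starts idle (-z) on a private RPC socket, a TLS controller is attached over that socket with the PSK, and bdevperf.py then kicks off the verify workload; teardown is the killprocess pattern again. The skeleton, condensed from the trace that follows, with repo-relative paths:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
  killprocess $bdevperf_pid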
00:23:08.811 [2024-07-21 03:32:52.013804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448415 ] 00:23:08.811 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.811 [2024-07-21 03:32:52.080567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.811 [2024-07-21 03:32:52.174485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.811 03:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.811 03:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:08.811 03:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa 00:23:08.811 [2024-07-21 03:32:52.524807] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.811 [2024-07-21 03:32:52.524948] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.811 TLSTESTn1 00:23:08.811 03:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:08.811 Running I/O for 10 seconds... 00:23:18.767 00:23:18.767 Latency(us) 00:23:18.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.767 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.767 Verification LBA range: start 0x0 length 0x2000 00:23:18.767 TLSTESTn1 : 10.03 3474.34 13.57 0.00 0.00 36771.37 6990.51 38447.79 00:23:18.768 =================================================================================================================== 00:23:18.768 Total : 3474.34 13.57 0.00 0.00 36771.37 6990.51 38447.79 00:23:18.768 0 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2448415 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2448415 ']' 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2448415 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2448415 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2448415' 00:23:18.768 killing process with pid 2448415 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2448415 00:23:18.768 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.768 00:23:18.768 Latency(us) 00:23:18.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:18.768 =================================================================================================================== 00:23:18.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.768 [2024-07-21 03:33:02.803693] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.768 03:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2448415 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A9FJNnUWWp 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A9FJNnUWWp 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A9FJNnUWWp 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.A9FJNnUWWp' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2449852 /var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2449852 ']' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.768 [2024-07-21 03:33:03.044145] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:18.768 [2024-07-21 03:33:03.044238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449852 ] 00:23:18.768 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.768 [2024-07-21 03:33:03.104083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.768 [2024-07-21 03:33:03.189950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A9FJNnUWWp 00:23:18.768 [2024-07-21 03:33:03.521792] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.768 [2024-07-21 03:33:03.521905] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:18.768 [2024-07-21 03:33:03.529165] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:18.768 [2024-07-21 03:33:03.529695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddbed0 (107): Transport endpoint is not connected 00:23:18.768 [2024-07-21 03:33:03.530685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddbed0 (9): Bad file descriptor 00:23:18.768 [2024-07-21 03:33:03.531684] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.768 [2024-07-21 03:33:03.531705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.768 [2024-07-21 03:33:03.531722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:18.768 request: 00:23:18.768 { 00:23:18.768 "name": "TLSTEST", 00:23:18.768 "trtype": "tcp", 00:23:18.768 "traddr": "10.0.0.2", 00:23:18.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.768 "adrfam": "ipv4", 00:23:18.768 "trsvcid": "4420", 00:23:18.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.768 "psk": "/tmp/tmp.A9FJNnUWWp", 00:23:18.768 "method": "bdev_nvme_attach_controller", 00:23:18.768 "req_id": 1 00:23:18.768 } 00:23:18.768 Got JSON-RPC error response 00:23:18.768 response: 00:23:18.768 { 00:23:18.768 "code": -5, 00:23:18.768 "message": "Input/output error" 00:23:18.768 } 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2449852 ']' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2449852' 00:23:18.768 killing process with pid 2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2449852 00:23:18.768 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.768 00:23:18.768 Latency(us) 00:23:18.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.768 =================================================================================================================== 00:23:18.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.768 [2024-07-21 03:33:03.582580] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2449852 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qLzm6C3aOa 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qLzm6C3aOa 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qLzm6C3aOa 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qLzm6C3aOa' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2449939 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2449939 /var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2449939 ']' 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:18.768 03:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.768 [2024-07-21 03:33:03.842916] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:18.768 [2024-07-21 03:33:03.843008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449939 ] 00:23:18.768 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.768 [2024-07-21 03:33:03.912657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.768 [2024-07-21 03:33:04.002832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.026 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.026 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:19.026 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qLzm6C3aOa 00:23:19.284 [2024-07-21 03:33:04.359641] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.284 [2024-07-21 03:33:04.359761] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:19.284 [2024-07-21 03:33:04.371274] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.284 [2024-07-21 03:33:04.371305] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.284 [2024-07-21 03:33:04.371341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:19.284 [2024-07-21 03:33:04.371644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3ed0 (107): Transport endpoint is not connected 00:23:19.284 [2024-07-21 03:33:04.372633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3ed0 (9): Bad file descriptor 00:23:19.284 [2024-07-21 03:33:04.373632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.284 [2024-07-21 03:33:04.373653] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.284 [2024-07-21 03:33:04.373671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
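The target-side errors above expose the PSK identity model: for TLS the server looks keys up by the string "NVMe0R01 <hostnqn> <subnqn>", so the key registered for host1 on cnode1 simply does not exist under host2's identity and the handshake is refused before the controller ever initializes; the initiator then sees ENOTCONN (errno 107) on the socket and the attach surfaces as the JSON-RPC -5 below. The mismatch in RPC form:

  # the key is bound to host1 + cnode1 ...
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa
  # ... so presenting the same key as host2 is expected to fail with -5:
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qLzm6C3aOa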
00:23:19.284 request: 00:23:19.284 { 00:23:19.284 "name": "TLSTEST", 00:23:19.284 "trtype": "tcp", 00:23:19.284 "traddr": "10.0.0.2", 00:23:19.284 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.284 "adrfam": "ipv4", 00:23:19.284 "trsvcid": "4420", 00:23:19.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.284 "psk": "/tmp/tmp.qLzm6C3aOa", 00:23:19.284 "method": "bdev_nvme_attach_controller", 00:23:19.284 "req_id": 1 00:23:19.284 } 00:23:19.284 Got JSON-RPC error response 00:23:19.284 response: 00:23:19.284 { 00:23:19.284 "code": -5, 00:23:19.284 "message": "Input/output error" 00:23:19.284 } 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2449939 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2449939 ']' 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2449939 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2449939 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2449939' 00:23:19.284 killing process with pid 2449939 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2449939 00:23:19.284 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.284 00:23:19.284 Latency(us) 00:23:19.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.284 =================================================================================================================== 00:23:19.284 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.284 [2024-07-21 03:33:04.424051] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:19.284 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2449939 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLzm6C3aOa 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLzm6C3aOa 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLzm6C3aOa 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qLzm6C3aOa' 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2450011 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2450011 /var/tmp/bdevperf.sock 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2450011 ']' 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.543 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.543 [2024-07-21 03:33:04.689112] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:19.543 [2024-07-21 03:33:04.689206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450011 ] 00:23:19.543 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.543 [2024-07-21 03:33:04.749537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.543 [2024-07-21 03:33:04.836959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.801 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.801 03:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:19.801 03:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qLzm6C3aOa 00:23:20.059 [2024-07-21 03:33:05.215409] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.059 [2024-07-21 03:33:05.215520] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.059 [2024-07-21 03:33:05.223418] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.059 [2024-07-21 03:33:05.223454] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.059 [2024-07-21 03:33:05.223494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.059 [2024-07-21 03:33:05.224294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1955ed0 (107): Transport endpoint is not connected 00:23:20.059 [2024-07-21 03:33:05.225285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1955ed0 (9): Bad file descriptor 00:23:20.059 [2024-07-21 03:33:05.226284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:20.059 [2024-07-21 03:33:05.226304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.059 [2024-07-21 03:33:05.226321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:20.059 request: 00:23:20.059 { 00:23:20.059 "name": "TLSTEST", 00:23:20.059 "trtype": "tcp", 00:23:20.059 "traddr": "10.0.0.2", 00:23:20.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.059 "adrfam": "ipv4", 00:23:20.059 "trsvcid": "4420", 00:23:20.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.059 "psk": "/tmp/tmp.qLzm6C3aOa", 00:23:20.059 "method": "bdev_nvme_attach_controller", 00:23:20.059 "req_id": 1 00:23:20.059 } 00:23:20.059 Got JSON-RPC error response 00:23:20.059 response: 00:23:20.059 { 00:23:20.059 "code": -5, 00:23:20.059 "message": "Input/output error" 00:23:20.059 } 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2450011 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2450011 ']' 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2450011 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2450011 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:20.059 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:20.060 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2450011' 00:23:20.060 killing process with pid 2450011 00:23:20.060 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2450011 00:23:20.060 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.060 00:23:20.060 Latency(us) 00:23:20.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.060 =================================================================================================================== 00:23:20.060 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.060 [2024-07-21 03:33:05.276168] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.060 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2450011 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
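Each negative case in this stretch runs under NOT, the suite's expected-failure wrapper: the inner run_bdevperf must exit non-zero (its attach dies with the JSON-RPC -5 just dumped), NOT inverts that into a pass, and the es=1 bookkeeping in the trace is the captured exit status. A simplified stand-in for the wrapper; the suite's real helper also distinguishes signal exits and status ranges, which is omitted here:

  NOT() {
      if "$@"; then
          return 1        # the command was expected to fail but succeeded
      fi
      return 0            # failure observed, so the negative test passes
  }
  # e.g.: NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLzm6C3aOa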
00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2450147 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2450147 /var/tmp/bdevperf.sock 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2450147 ']' 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:20.318 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.318 [2024-07-21 03:33:05.539701] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
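A condensed sketch of what the run_bdevperf helper traced above does before any RPC is issued, using the exact flags from the trace (the backgrounding and pid capture are assumptions; the helper's real bookkeeping lives in target/tls.sh):

# Start bdevperf in wait mode (-z) on a private RPC socket; it sits idle
# until a perform_tests request arrives on /var/tmp/bdevperf.sock.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# waitforlisten (autotest_common.sh) blocks until the socket is up.
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock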
00:23:20.318 [2024-07-21 03:33:05.539797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450147 ] 00:23:20.318 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.318 [2024-07-21 03:33:05.597113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.576 [2024-07-21 03:33:05.678838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.576 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.576 03:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:20.576 03:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:20.833 [2024-07-21 03:33:06.015535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.833 [2024-07-21 03:33:06.016949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86a5c0 (9): Bad file descriptor 00:23:20.833 [2024-07-21 03:33:06.017929] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.833 [2024-07-21 03:33:06.017951] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.833 [2024-07-21 03:33:06.017981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
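This negative case is the same attach command as the passing runs, only without --psk. Because the listener was created with -k, the plain-text connection never completes a handshake; the initiator sees errno 107 (Transport endpoint is not connected), which surfaces as the -5 JSON-RPC error below. Isolated from the trace:

# Expected to fail: TLS-only listener, no --psk supplied.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1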
00:23:20.833 request: 00:23:20.833 { 00:23:20.833 "name": "TLSTEST", 00:23:20.833 "trtype": "tcp", 00:23:20.833 "traddr": "10.0.0.2", 00:23:20.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.833 "adrfam": "ipv4", 00:23:20.833 "trsvcid": "4420", 00:23:20.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.833 "method": "bdev_nvme_attach_controller", 00:23:20.833 "req_id": 1 00:23:20.833 } 00:23:20.833 Got JSON-RPC error response 00:23:20.833 response: 00:23:20.833 { 00:23:20.833 "code": -5, 00:23:20.833 "message": "Input/output error" 00:23:20.833 } 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2450147 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2450147 ']' 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2450147 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2450147 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2450147' 00:23:20.833 killing process with pid 2450147 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2450147 00:23:20.833 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.833 00:23:20.833 Latency(us) 00:23:20.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.833 =================================================================================================================== 00:23:20.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.833 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2450147 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2446590 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2446590 ']' 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2446590 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2446590 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2446590' 00:23:21.090 killing process with pid 2446590 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2446590 
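The NOT wrapper seen in the trace inverts a command's exit status so that an expected failure counts as a pass. A hedged sketch of its core only; the real helper in autotest_common.sh also records the status (the "es" checks above) and treats es > 128 as a hard failure, so a crash or signal still fails the test:

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what was expected
}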
00:23:21.090 [2024-07-21 03:33:06.286588] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:21.090 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2446590 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.o2BoRyDPUu 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.o2BoRyDPUu 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2450296 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2450296 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2450296 ']' 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.347 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.347 [2024-07-21 03:33:06.594467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
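The key material used from here on comes from the format_interchange_psk/format_key helpers traced above. A hedged reconstruction, mirroring the python heredoc the log itself shows (nvmf/common.sh@705): the interchange form is "<prefix>:<2-digit hash id>:" + base64(key bytes + little-endian CRC32) + ":", with digest 2 selecting the SHA-384 flavor and the 48-character hex string used as-is:

format_key() {
    local prefix=$1 key=$2 digest=$3
    python - <<EOF
import base64, zlib
key = b"$key"  # 48 ASCII hex chars, taken verbatim as key bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
b64 = base64.b64encode(key + crc).decode("utf-8")
print("$prefix:{:02x}:{}:".format($digest, b64), end="")
EOF
}

key_long=$(format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2)
# Reproduces the NVMeTLSkey-1:02:MDAx...NjY3NzwWXNJw==: value captured above;
# the key is then written to a mktemp file and locked down with chmod 0600.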
00:23:21.347 [2024-07-21 03:33:06.594560] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.347 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.605 [2024-07-21 03:33:06.662870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.605 [2024-07-21 03:33:06.754550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.605 [2024-07-21 03:33:06.754630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.605 [2024-07-21 03:33:06.754659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.605 [2024-07-21 03:33:06.754674] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.605 [2024-07-21 03:33:06.754686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.605 [2024-07-21 03:33:06.754716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o2BoRyDPUu 00:23:21.605 03:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.862 [2024-07-21 03:33:07.117862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.862 03:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.119 03:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.377 [2024-07-21 03:33:07.611229] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.377 [2024-07-21 03:33:07.611496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.377 03:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.634 malloc0 00:23:22.634 03:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.892 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 
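The target-side TLS bring-up just traced, condensed to its six RPCs (paths shortened to scripts/rpc.py; the arguments are verbatim from the trace). The -k flag on the listener is what enables TLS, and add_host --psk binds the 0600 key file to the host/subsystem pair:

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu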
00:23:23.150 [2024-07-21 03:33:08.368691] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o2BoRyDPUu 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o2BoRyDPUu' 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2450935 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2450935 /var/tmp/bdevperf.sock 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2450935 ']' 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.150 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.150 [2024-07-21 03:33:08.431713] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:23.150 [2024-07-21 03:33:08.431789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450935 ] 00:23:23.150 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.407 [2024-07-21 03:33:08.491725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.407 [2024-07-21 03:33:08.576503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.407 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.407 03:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:23.407 03:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:23.664 [2024-07-21 03:33:08.913813] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.665 [2024-07-21 03:33:08.913949] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:23.922 TLSTESTn1 00:23:23.922 03:33:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.922 Running I/O for 10 seconds... 00:23:33.925 00:23:33.926 Latency(us) 00:23:33.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.926 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.926 Verification LBA range: start 0x0 length 0x2000 00:23:33.926 TLSTESTn1 : 10.02 3555.51 13.89 0.00 0.00 35939.24 7961.41 34758.35 00:23:33.926 =================================================================================================================== 00:23:33.926 Total : 3555.51 13.89 0.00 0.00 35939.24 7961.41 34758.35 00:23:33.926 0 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2450935 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2450935 ']' 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2450935 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2450935 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2450935' 00:23:33.926 killing process with pid 2450935 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2450935 00:23:33.926 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.926 00:23:33.926 Latency(us) 00:23:33.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:33.926 =================================================================================================================== 00:23:33.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.926 [2024-07-21 03:33:19.188790] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.926 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2450935 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.o2BoRyDPUu 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o2BoRyDPUu 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o2BoRyDPUu 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o2BoRyDPUu 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o2BoRyDPUu' 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2452275 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2452275 /var/tmp/bdevperf.sock 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2452275 ']' 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.184 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.184 [2024-07-21 03:33:19.464351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
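The passing run above (TLSTESTn1 sustaining roughly 3.5k IOPS for 10 seconds) is driven entirely over the bdevperf RPC socket. Condensed from the trace; note that -t 20 on bdevperf.py is, as far as the trace shows, the RPC helper's timeout, separate from the -t 10 run length given when bdevperf was launched:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests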
00:23:34.184 [2024-07-21 03:33:19.464441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452275 ] 00:23:34.184 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.442 [2024-07-21 03:33:19.524683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.442 [2024-07-21 03:33:19.610384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.442 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.442 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:34.442 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:34.699 [2024-07-21 03:33:19.936566] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.699 [2024-07-21 03:33:19.936681] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:34.699 [2024-07-21 03:33:19.936697] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.o2BoRyDPUu 00:23:34.699 request: 00:23:34.699 { 00:23:34.699 "name": "TLSTEST", 00:23:34.699 "trtype": "tcp", 00:23:34.699 "traddr": "10.0.0.2", 00:23:34.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.699 "adrfam": "ipv4", 00:23:34.699 "trsvcid": "4420", 00:23:34.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.699 "psk": "/tmp/tmp.o2BoRyDPUu", 00:23:34.699 "method": "bdev_nvme_attach_controller", 00:23:34.699 "req_id": 1 00:23:34.699 } 00:23:34.699 Got JSON-RPC error response 00:23:34.699 response: 00:23:34.699 { 00:23:34.699 "code": -1, 00:23:34.699 "message": "Operation not permitted" 00:23:34.699 } 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2452275 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2452275 ']' 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2452275 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2452275 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2452275' 00:23:34.699 killing process with pid 2452275 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2452275 00:23:34.699 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.699 00:23:34.699 Latency(us) 00:23:34.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.699 =================================================================================================================== 00:23:34.699 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.699 03:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 2452275 00:23:34.956 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:34.956 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:34.956 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.956 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.956 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2450296 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2450296 ']' 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2450296 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2450296 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2450296' 00:23:34.957 killing process with pid 2450296 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2450296 00:23:34.957 [2024-07-21 03:33:20.235965] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:34.957 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2450296 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2452418 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2452418 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2452418 ']' 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.216 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.476 [2024-07-21 03:33:20.544263] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
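With the key file loosened to 0666, the attach above fails inside bdev_nvme_load_psk before any connection is attempted, and the same gate fires target-side in tcp_load_psk a little later in this log. A shell approximation of the check (the real test is in C; the exact mode bits it masks are an assumption inferred from 0600 passing and 0666 failing):

psk_path=/tmp/tmp.o2BoRyDPUu
mode=$(stat -c '%a' "$psk_path")
if (( 8#$mode & 8#77 )); then   # any group/other permission bit set
    echo "Incorrect permissions for PSK file" >&2
    exit 1
fi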
00:23:35.476 [2024-07-21 03:33:20.544366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.476 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.476 [2024-07-21 03:33:20.613724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.476 [2024-07-21 03:33:20.703963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.476 [2024-07-21 03:33:20.704024] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.476 [2024-07-21 03:33:20.704049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.476 [2024-07-21 03:33:20.704062] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.476 [2024-07-21 03:33:20.704074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.476 [2024-07-21 03:33:20.704104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o2BoRyDPUu 00:23:35.734 03:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.991 [2024-07-21 03:33:21.059666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.991 03:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.248 03:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.248 [2024-07-21 03:33:21.520866] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:36.248 [2024-07-21 03:33:21.521110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.248 03:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.506 malloc0 00:23:36.506 03:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.763 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:37.020 [2024-07-21 03:33:22.234240] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:37.020 [2024-07-21 03:33:22.234285] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:37.020 [2024-07-21 03:33:22.234334] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:37.020 request: 00:23:37.020 { 00:23:37.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.020 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.020 "psk": "/tmp/tmp.o2BoRyDPUu", 00:23:37.020 "method": "nvmf_subsystem_add_host", 00:23:37.020 "req_id": 1 00:23:37.020 } 00:23:37.020 Got JSON-RPC error response 00:23:37.020 response: 00:23:37.020 { 00:23:37.020 "code": -32603, 00:23:37.020 "message": "Internal error" 00:23:37.020 } 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2452418 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2452418 ']' 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2452418 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2452418 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2452418' 00:23:37.020 killing process with pid 2452418 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2452418 00:23:37.020 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2452418 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.o2BoRyDPUu 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2452712 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2452712 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2452712 ']' 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.277 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.277 [2024-07-21 03:33:22.588221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:37.277 [2024-07-21 03:33:22.588310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.535 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.535 [2024-07-21 03:33:22.652073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.535 [2024-07-21 03:33:22.735218] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.535 [2024-07-21 03:33:22.735269] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.535 [2024-07-21 03:33:22.735293] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.535 [2024-07-21 03:33:22.735304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.535 [2024-07-21 03:33:22.735314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.535 [2024-07-21 03:33:22.735339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.535 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o2BoRyDPUu 00:23:37.792 03:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:38.048 [2024-07-21 03:33:23.110109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.048 03:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:38.305 03:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:38.305 [2024-07-21 03:33:23.587382] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.305 [2024-07-21 03:33:23.587677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.305 03:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.562 malloc0 00:23:38.562 03:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.819 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:39.076 [2024-07-21 03:33:24.312861] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2452995 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2452995 /var/tmp/bdevperf.sock 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2452995 ']' 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.077 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.077 [2024-07-21 03:33:24.372450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:39.077 [2024-07-21 03:33:24.372523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452995 ] 00:23:39.334 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.334 [2024-07-21 03:33:24.432852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.334 [2024-07-21 03:33:24.518459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.334 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.334 03:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.334 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:39.591 [2024-07-21 03:33:24.830555] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.591 [2024-07-21 03:33:24.830688] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.591 TLSTESTn1 00:23:39.848 03:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:40.105 03:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:40.105 "subsystems": [ 00:23:40.105 { 00:23:40.105 "subsystem": "keyring", 00:23:40.105 "config": [] 00:23:40.105 }, 00:23:40.105 { 00:23:40.105 "subsystem": "iobuf", 00:23:40.105 "config": [ 00:23:40.105 { 00:23:40.105 "method": "iobuf_set_options", 00:23:40.105 "params": { 00:23:40.105 "small_pool_count": 8192, 00:23:40.105 "large_pool_count": 1024, 00:23:40.105 "small_bufsize": 8192, 00:23:40.105 "large_bufsize": 135168 00:23:40.105 } 00:23:40.105 } 00:23:40.105 ] 00:23:40.105 }, 00:23:40.105 { 00:23:40.105 "subsystem": "sock", 00:23:40.105 "config": [ 00:23:40.105 { 00:23:40.106 "method": "sock_set_default_impl", 00:23:40.106 "params": { 00:23:40.106 "impl_name": "posix" 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "sock_impl_set_options", 00:23:40.106 "params": { 00:23:40.106 "impl_name": "ssl", 00:23:40.106 "recv_buf_size": 4096, 00:23:40.106 "send_buf_size": 4096, 00:23:40.106 "enable_recv_pipe": true, 00:23:40.106 "enable_quickack": false, 00:23:40.106 "enable_placement_id": 0, 00:23:40.106 "enable_zerocopy_send_server": true, 00:23:40.106 "enable_zerocopy_send_client": false, 00:23:40.106 "zerocopy_threshold": 0, 00:23:40.106 "tls_version": 0, 00:23:40.106 "enable_ktls": false 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "sock_impl_set_options", 00:23:40.106 "params": { 00:23:40.106 "impl_name": "posix", 00:23:40.106 "recv_buf_size": 2097152, 00:23:40.106 "send_buf_size": 
2097152, 00:23:40.106 "enable_recv_pipe": true, 00:23:40.106 "enable_quickack": false, 00:23:40.106 "enable_placement_id": 0, 00:23:40.106 "enable_zerocopy_send_server": true, 00:23:40.106 "enable_zerocopy_send_client": false, 00:23:40.106 "zerocopy_threshold": 0, 00:23:40.106 "tls_version": 0, 00:23:40.106 "enable_ktls": false 00:23:40.106 } 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "vmd", 00:23:40.106 "config": [] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "accel", 00:23:40.106 "config": [ 00:23:40.106 { 00:23:40.106 "method": "accel_set_options", 00:23:40.106 "params": { 00:23:40.106 "small_cache_size": 128, 00:23:40.106 "large_cache_size": 16, 00:23:40.106 "task_count": 2048, 00:23:40.106 "sequence_count": 2048, 00:23:40.106 "buf_count": 2048 00:23:40.106 } 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "bdev", 00:23:40.106 "config": [ 00:23:40.106 { 00:23:40.106 "method": "bdev_set_options", 00:23:40.106 "params": { 00:23:40.106 "bdev_io_pool_size": 65535, 00:23:40.106 "bdev_io_cache_size": 256, 00:23:40.106 "bdev_auto_examine": true, 00:23:40.106 "iobuf_small_cache_size": 128, 00:23:40.106 "iobuf_large_cache_size": 16 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_raid_set_options", 00:23:40.106 "params": { 00:23:40.106 "process_window_size_kb": 1024 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_iscsi_set_options", 00:23:40.106 "params": { 00:23:40.106 "timeout_sec": 30 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_nvme_set_options", 00:23:40.106 "params": { 00:23:40.106 "action_on_timeout": "none", 00:23:40.106 "timeout_us": 0, 00:23:40.106 "timeout_admin_us": 0, 00:23:40.106 "keep_alive_timeout_ms": 10000, 00:23:40.106 "arbitration_burst": 0, 00:23:40.106 "low_priority_weight": 0, 00:23:40.106 "medium_priority_weight": 0, 00:23:40.106 "high_priority_weight": 0, 00:23:40.106 "nvme_adminq_poll_period_us": 10000, 00:23:40.106 "nvme_ioq_poll_period_us": 0, 00:23:40.106 "io_queue_requests": 0, 00:23:40.106 "delay_cmd_submit": true, 00:23:40.106 "transport_retry_count": 4, 00:23:40.106 "bdev_retry_count": 3, 00:23:40.106 "transport_ack_timeout": 0, 00:23:40.106 "ctrlr_loss_timeout_sec": 0, 00:23:40.106 "reconnect_delay_sec": 0, 00:23:40.106 "fast_io_fail_timeout_sec": 0, 00:23:40.106 "disable_auto_failback": false, 00:23:40.106 "generate_uuids": false, 00:23:40.106 "transport_tos": 0, 00:23:40.106 "nvme_error_stat": false, 00:23:40.106 "rdma_srq_size": 0, 00:23:40.106 "io_path_stat": false, 00:23:40.106 "allow_accel_sequence": false, 00:23:40.106 "rdma_max_cq_size": 0, 00:23:40.106 "rdma_cm_event_timeout_ms": 0, 00:23:40.106 "dhchap_digests": [ 00:23:40.106 "sha256", 00:23:40.106 "sha384", 00:23:40.106 "sha512" 00:23:40.106 ], 00:23:40.106 "dhchap_dhgroups": [ 00:23:40.106 "null", 00:23:40.106 "ffdhe2048", 00:23:40.106 "ffdhe3072", 00:23:40.106 "ffdhe4096", 00:23:40.106 "ffdhe6144", 00:23:40.106 "ffdhe8192" 00:23:40.106 ] 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_nvme_set_hotplug", 00:23:40.106 "params": { 00:23:40.106 "period_us": 100000, 00:23:40.106 "enable": false 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_malloc_create", 00:23:40.106 "params": { 00:23:40.106 "name": "malloc0", 00:23:40.106 "num_blocks": 8192, 00:23:40.106 "block_size": 4096, 00:23:40.106 "physical_block_size": 4096, 00:23:40.106 "uuid": 
"e7133111-c9b5-49f4-899e-3c9cea315295", 00:23:40.106 "optimal_io_boundary": 0 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "bdev_wait_for_examine" 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "nbd", 00:23:40.106 "config": [] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "scheduler", 00:23:40.106 "config": [ 00:23:40.106 { 00:23:40.106 "method": "framework_set_scheduler", 00:23:40.106 "params": { 00:23:40.106 "name": "static" 00:23:40.106 } 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "subsystem": "nvmf", 00:23:40.106 "config": [ 00:23:40.106 { 00:23:40.106 "method": "nvmf_set_config", 00:23:40.106 "params": { 00:23:40.106 "discovery_filter": "match_any", 00:23:40.106 "admin_cmd_passthru": { 00:23:40.106 "identify_ctrlr": false 00:23:40.106 } 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_set_max_subsystems", 00:23:40.106 "params": { 00:23:40.106 "max_subsystems": 1024 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_set_crdt", 00:23:40.106 "params": { 00:23:40.106 "crdt1": 0, 00:23:40.106 "crdt2": 0, 00:23:40.106 "crdt3": 0 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_create_transport", 00:23:40.106 "params": { 00:23:40.106 "trtype": "TCP", 00:23:40.106 "max_queue_depth": 128, 00:23:40.106 "max_io_qpairs_per_ctrlr": 127, 00:23:40.106 "in_capsule_data_size": 4096, 00:23:40.106 "max_io_size": 131072, 00:23:40.106 "io_unit_size": 131072, 00:23:40.106 "max_aq_depth": 128, 00:23:40.106 "num_shared_buffers": 511, 00:23:40.106 "buf_cache_size": 4294967295, 00:23:40.106 "dif_insert_or_strip": false, 00:23:40.106 "zcopy": false, 00:23:40.106 "c2h_success": false, 00:23:40.106 "sock_priority": 0, 00:23:40.106 "abort_timeout_sec": 1, 00:23:40.106 "ack_timeout": 0, 00:23:40.106 "data_wr_pool_size": 0 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_create_subsystem", 00:23:40.106 "params": { 00:23:40.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.106 "allow_any_host": false, 00:23:40.106 "serial_number": "SPDK00000000000001", 00:23:40.106 "model_number": "SPDK bdev Controller", 00:23:40.106 "max_namespaces": 10, 00:23:40.106 "min_cntlid": 1, 00:23:40.106 "max_cntlid": 65519, 00:23:40.106 "ana_reporting": false 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_subsystem_add_host", 00:23:40.106 "params": { 00:23:40.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.106 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.106 "psk": "/tmp/tmp.o2BoRyDPUu" 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_subsystem_add_ns", 00:23:40.106 "params": { 00:23:40.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.106 "namespace": { 00:23:40.106 "nsid": 1, 00:23:40.106 "bdev_name": "malloc0", 00:23:40.106 "nguid": "E7133111C9B549F4899E3C9CEA315295", 00:23:40.106 "uuid": "e7133111-c9b5-49f4-899e-3c9cea315295", 00:23:40.106 "no_auto_visible": false 00:23:40.106 } 00:23:40.106 } 00:23:40.106 }, 00:23:40.106 { 00:23:40.106 "method": "nvmf_subsystem_add_listener", 00:23:40.106 "params": { 00:23:40.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.106 "listen_address": { 00:23:40.106 "trtype": "TCP", 00:23:40.106 "adrfam": "IPv4", 00:23:40.106 "traddr": "10.0.0.2", 00:23:40.106 "trsvcid": "4420" 00:23:40.106 }, 00:23:40.106 "secure_channel": true 00:23:40.106 } 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 } 00:23:40.106 ] 00:23:40.106 }' 00:23:40.106 03:33:25 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:40.364 "subsystems": [ 00:23:40.364 { 00:23:40.364 "subsystem": "keyring", 00:23:40.364 "config": [] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "iobuf", 00:23:40.364 "config": [ 00:23:40.364 { 00:23:40.364 "method": "iobuf_set_options", 00:23:40.364 "params": { 00:23:40.364 "small_pool_count": 8192, 00:23:40.364 "large_pool_count": 1024, 00:23:40.364 "small_bufsize": 8192, 00:23:40.364 "large_bufsize": 135168 00:23:40.364 } 00:23:40.364 } 00:23:40.364 ] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "sock", 00:23:40.364 "config": [ 00:23:40.364 { 00:23:40.364 "method": "sock_set_default_impl", 00:23:40.364 "params": { 00:23:40.364 "impl_name": "posix" 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "sock_impl_set_options", 00:23:40.364 "params": { 00:23:40.364 "impl_name": "ssl", 00:23:40.364 "recv_buf_size": 4096, 00:23:40.364 "send_buf_size": 4096, 00:23:40.364 "enable_recv_pipe": true, 00:23:40.364 "enable_quickack": false, 00:23:40.364 "enable_placement_id": 0, 00:23:40.364 "enable_zerocopy_send_server": true, 00:23:40.364 "enable_zerocopy_send_client": false, 00:23:40.364 "zerocopy_threshold": 0, 00:23:40.364 "tls_version": 0, 00:23:40.364 "enable_ktls": false 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "sock_impl_set_options", 00:23:40.364 "params": { 00:23:40.364 "impl_name": "posix", 00:23:40.364 "recv_buf_size": 2097152, 00:23:40.364 "send_buf_size": 2097152, 00:23:40.364 "enable_recv_pipe": true, 00:23:40.364 "enable_quickack": false, 00:23:40.364 "enable_placement_id": 0, 00:23:40.364 "enable_zerocopy_send_server": true, 00:23:40.364 "enable_zerocopy_send_client": false, 00:23:40.364 "zerocopy_threshold": 0, 00:23:40.364 "tls_version": 0, 00:23:40.364 "enable_ktls": false 00:23:40.364 } 00:23:40.364 } 00:23:40.364 ] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "vmd", 00:23:40.364 "config": [] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "accel", 00:23:40.364 "config": [ 00:23:40.364 { 00:23:40.364 "method": "accel_set_options", 00:23:40.364 "params": { 00:23:40.364 "small_cache_size": 128, 00:23:40.364 "large_cache_size": 16, 00:23:40.364 "task_count": 2048, 00:23:40.364 "sequence_count": 2048, 00:23:40.364 "buf_count": 2048 00:23:40.364 } 00:23:40.364 } 00:23:40.364 ] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "bdev", 00:23:40.364 "config": [ 00:23:40.364 { 00:23:40.364 "method": "bdev_set_options", 00:23:40.364 "params": { 00:23:40.364 "bdev_io_pool_size": 65535, 00:23:40.364 "bdev_io_cache_size": 256, 00:23:40.364 "bdev_auto_examine": true, 00:23:40.364 "iobuf_small_cache_size": 128, 00:23:40.364 "iobuf_large_cache_size": 16 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_raid_set_options", 00:23:40.364 "params": { 00:23:40.364 "process_window_size_kb": 1024 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_iscsi_set_options", 00:23:40.364 "params": { 00:23:40.364 "timeout_sec": 30 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_nvme_set_options", 00:23:40.364 "params": { 00:23:40.364 "action_on_timeout": "none", 00:23:40.364 "timeout_us": 0, 00:23:40.364 "timeout_admin_us": 0, 00:23:40.364 "keep_alive_timeout_ms": 10000, 00:23:40.364 "arbitration_burst": 0, 
00:23:40.364 "low_priority_weight": 0, 00:23:40.364 "medium_priority_weight": 0, 00:23:40.364 "high_priority_weight": 0, 00:23:40.364 "nvme_adminq_poll_period_us": 10000, 00:23:40.364 "nvme_ioq_poll_period_us": 0, 00:23:40.364 "io_queue_requests": 512, 00:23:40.364 "delay_cmd_submit": true, 00:23:40.364 "transport_retry_count": 4, 00:23:40.364 "bdev_retry_count": 3, 00:23:40.364 "transport_ack_timeout": 0, 00:23:40.364 "ctrlr_loss_timeout_sec": 0, 00:23:40.364 "reconnect_delay_sec": 0, 00:23:40.364 "fast_io_fail_timeout_sec": 0, 00:23:40.364 "disable_auto_failback": false, 00:23:40.364 "generate_uuids": false, 00:23:40.364 "transport_tos": 0, 00:23:40.364 "nvme_error_stat": false, 00:23:40.364 "rdma_srq_size": 0, 00:23:40.364 "io_path_stat": false, 00:23:40.364 "allow_accel_sequence": false, 00:23:40.364 "rdma_max_cq_size": 0, 00:23:40.364 "rdma_cm_event_timeout_ms": 0, 00:23:40.364 "dhchap_digests": [ 00:23:40.364 "sha256", 00:23:40.364 "sha384", 00:23:40.364 "sha512" 00:23:40.364 ], 00:23:40.364 "dhchap_dhgroups": [ 00:23:40.364 "null", 00:23:40.364 "ffdhe2048", 00:23:40.364 "ffdhe3072", 00:23:40.364 "ffdhe4096", 00:23:40.364 "ffdhe6144", 00:23:40.364 "ffdhe8192" 00:23:40.364 ] 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_nvme_attach_controller", 00:23:40.364 "params": { 00:23:40.364 "name": "TLSTEST", 00:23:40.364 "trtype": "TCP", 00:23:40.364 "adrfam": "IPv4", 00:23:40.364 "traddr": "10.0.0.2", 00:23:40.364 "trsvcid": "4420", 00:23:40.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.364 "prchk_reftag": false, 00:23:40.364 "prchk_guard": false, 00:23:40.364 "ctrlr_loss_timeout_sec": 0, 00:23:40.364 "reconnect_delay_sec": 0, 00:23:40.364 "fast_io_fail_timeout_sec": 0, 00:23:40.364 "psk": "/tmp/tmp.o2BoRyDPUu", 00:23:40.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.364 "hdgst": false, 00:23:40.364 "ddgst": false 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_nvme_set_hotplug", 00:23:40.364 "params": { 00:23:40.364 "period_us": 100000, 00:23:40.364 "enable": false 00:23:40.364 } 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "method": "bdev_wait_for_examine" 00:23:40.364 } 00:23:40.364 ] 00:23:40.364 }, 00:23:40.364 { 00:23:40.364 "subsystem": "nbd", 00:23:40.364 "config": [] 00:23:40.364 } 00:23:40.364 ] 00:23:40.364 }' 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2452995 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2452995 ']' 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2452995 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2452995 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2452995' 00:23:40.364 killing process with pid 2452995 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2452995 00:23:40.364 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.364 00:23:40.364 Latency(us) 00:23:40.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:40.364 =================================================================================================================== 00:23:40.364 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.364 [2024-07-21 03:33:25.563980] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.364 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2452995 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2452712 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2452712 ']' 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2452712 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2452712 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2452712' 00:23:40.622 killing process with pid 2452712 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2452712 00:23:40.622 [2024-07-21 03:33:25.817710] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.622 03:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2452712 00:23:40.880 03:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:40.880 03:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.880 03:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:40.880 "subsystems": [ 00:23:40.880 { 00:23:40.880 "subsystem": "keyring", 00:23:40.880 "config": [] 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "subsystem": "iobuf", 00:23:40.880 "config": [ 00:23:40.880 { 00:23:40.880 "method": "iobuf_set_options", 00:23:40.880 "params": { 00:23:40.880 "small_pool_count": 8192, 00:23:40.880 "large_pool_count": 1024, 00:23:40.880 "small_bufsize": 8192, 00:23:40.880 "large_bufsize": 135168 00:23:40.880 } 00:23:40.880 } 00:23:40.880 ] 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "subsystem": "sock", 00:23:40.880 "config": [ 00:23:40.880 { 00:23:40.880 "method": "sock_set_default_impl", 00:23:40.880 "params": { 00:23:40.880 "impl_name": "posix" 00:23:40.880 } 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "method": "sock_impl_set_options", 00:23:40.880 "params": { 00:23:40.880 "impl_name": "ssl", 00:23:40.880 "recv_buf_size": 4096, 00:23:40.880 "send_buf_size": 4096, 00:23:40.880 "enable_recv_pipe": true, 00:23:40.880 "enable_quickack": false, 00:23:40.880 "enable_placement_id": 0, 00:23:40.880 "enable_zerocopy_send_server": true, 00:23:40.880 "enable_zerocopy_send_client": false, 00:23:40.880 "zerocopy_threshold": 0, 00:23:40.880 "tls_version": 0, 00:23:40.880 "enable_ktls": false 00:23:40.880 } 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "method": "sock_impl_set_options", 00:23:40.880 "params": { 00:23:40.880 "impl_name": "posix", 00:23:40.880 "recv_buf_size": 2097152, 00:23:40.880 "send_buf_size": 2097152, 00:23:40.880 "enable_recv_pipe": true, 
00:23:40.880 "enable_quickack": false, 00:23:40.880 "enable_placement_id": 0, 00:23:40.880 "enable_zerocopy_send_server": true, 00:23:40.880 "enable_zerocopy_send_client": false, 00:23:40.880 "zerocopy_threshold": 0, 00:23:40.880 "tls_version": 0, 00:23:40.880 "enable_ktls": false 00:23:40.880 } 00:23:40.880 } 00:23:40.880 ] 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "subsystem": "vmd", 00:23:40.880 "config": [] 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "subsystem": "accel", 00:23:40.880 "config": [ 00:23:40.880 { 00:23:40.880 "method": "accel_set_options", 00:23:40.880 "params": { 00:23:40.880 "small_cache_size": 128, 00:23:40.880 "large_cache_size": 16, 00:23:40.880 "task_count": 2048, 00:23:40.880 "sequence_count": 2048, 00:23:40.880 "buf_count": 2048 00:23:40.880 } 00:23:40.880 } 00:23:40.880 ] 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "subsystem": "bdev", 00:23:40.880 "config": [ 00:23:40.880 { 00:23:40.880 "method": "bdev_set_options", 00:23:40.880 "params": { 00:23:40.880 "bdev_io_pool_size": 65535, 00:23:40.880 "bdev_io_cache_size": 256, 00:23:40.880 "bdev_auto_examine": true, 00:23:40.880 "iobuf_small_cache_size": 128, 00:23:40.880 "iobuf_large_cache_size": 16 00:23:40.880 } 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "method": "bdev_raid_set_options", 00:23:40.880 "params": { 00:23:40.880 "process_window_size_kb": 1024 00:23:40.880 } 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "method": "bdev_iscsi_set_options", 00:23:40.880 "params": { 00:23:40.880 "timeout_sec": 30 00:23:40.880 } 00:23:40.880 }, 00:23:40.880 { 00:23:40.880 "method": "bdev_nvme_set_options", 00:23:40.880 "params": { 00:23:40.880 "action_on_timeout": "none", 00:23:40.880 "timeout_us": 0, 00:23:40.880 "timeout_admin_us": 0, 00:23:40.880 "keep_alive_timeout_ms": 10000, 00:23:40.880 "arbitration_burst": 0, 00:23:40.880 "low_priority_weight": 0, 00:23:40.880 "medium_priority_weight": 0, 00:23:40.880 "high_priority_weight": 0, 00:23:40.880 "nvme_adminq_poll_period_us": 10000, 00:23:40.880 "nvme_ioq_poll_period_us": 0, 00:23:40.880 "io_queue_requests": 0, 00:23:40.880 "delay_cmd_submit": true, 00:23:40.880 "transport_retry_count": 4, 00:23:40.880 "bdev_retry_count": 3, 00:23:40.880 "transport_ack_timeout": 0, 00:23:40.880 "ctrlr_loss_timeout_sec": 0, 00:23:40.880 "reconnect_delay_sec": 0, 00:23:40.880 "fast_io_fail_timeout_sec": 0, 00:23:40.880 "disable_auto_failback": false, 00:23:40.880 "generate_uuids": false, 00:23:40.880 "transport_tos": 0, 00:23:40.880 "nvme_error_stat": false, 00:23:40.880 "rdma_srq_size": 0, 00:23:40.880 "io_path_stat": false, 00:23:40.880 "allow_accel_sequence": false, 00:23:40.880 "rdma_max_cq_size": 0, 00:23:40.880 "rdma_cm_event_timeout_ms": 0, 00:23:40.880 "dhchap_digests": [ 00:23:40.880 "sha256", 00:23:40.880 "sha384", 00:23:40.880 "sha512" 00:23:40.880 ], 00:23:40.880 "dhchap_dhgroups": [ 00:23:40.880 "null", 00:23:40.880 "ffdhe2048", 00:23:40.880 "ffdhe3072", 00:23:40.880 "ffdhe4096", 00:23:40.880 "ffdhe6144", 00:23:40.880 "ffdhe8192" 00:23:40.880 ] 00:23:40.880 } 00:23:40.880 }, 00:23:40.881 { 00:23:40.881 "method": "bdev_nvme_set_hotplug", 00:23:40.881 "params": { 00:23:40.881 "period_us": 100000, 00:23:40.881 "enable": false 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "bdev_malloc_create", 00:23:40.881 "params": { 00:23:40.881 "name": "malloc0", 00:23:40.881 "num_blocks": 8192, 00:23:40.881 "block_size": 4096, 00:23:40.881 "physical_block_size": 4096, 00:23:40.881 "uuid": "e7133111-c9b5-49f4-899e-3c9cea315295", 00:23:40.881 "optimal_io_boundary": 0 
00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "bdev_wait_for_examine" 00:23:40.881 } 00:23:40.881 ] 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "subsystem": "nbd", 00:23:40.881 "config": [] 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "subsystem": "scheduler", 00:23:40.881 "config": [ 00:23:40.881 { 00:23:40.881 "method": "framework_set_scheduler", 00:23:40.881 "params": { 00:23:40.881 "name": "static" 00:23:40.881 } 00:23:40.881 } 00:23:40.881 ] 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "subsystem": "nvmf", 00:23:40.881 "config": [ 00:23:40.881 { 00:23:40.881 "method": "nvmf_set_config", 00:23:40.881 "params": { 00:23:40.881 "discovery_filter": "match_any", 00:23:40.881 "admin_cmd_passthru": { 00:23:40.881 "identify_ctrlr": false 00:23:40.881 } 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_set_max_subsystems", 00:23:40.881 "params": { 00:23:40.881 "max_subsystems": 1024 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_set_crdt", 00:23:40.881 "params": { 00:23:40.881 "crdt1": 0, 00:23:40.881 "crdt2": 0, 00:23:40.881 "crdt3": 0 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_create_transport", 00:23:40.881 "params": { 00:23:40.881 "trtype": "TCP", 00:23:40.881 "max_queue_depth": 128, 00:23:40.881 "max_io_qpairs_per_ctrlr": 127, 00:23:40.881 "in_capsule_data_size": 4096, 00:23:40.881 "max_io_size": 131072, 00:23:40.881 "io_unit_size": 131072, 00:23:40.881 "max_aq_depth": 128, 00:23:40.881 "num_shared_buffers": 511, 00:23:40.881 "buf_cache_size": 4294967295, 00:23:40.881 "dif_insert_or_strip": false, 00:23:40.881 "zcopy": false, 00:23:40.881 "c2h_success": false, 00:23:40.881 "sock_priority": 0, 00:23:40.881 "abort_timeout_sec": 1, 00:23:40.881 "ack_timeout": 0, 00:23:40.881 "data_wr_pool_size": 0 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_create_subsystem", 00:23:40.881 "params": { 00:23:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.881 "allow_any_host": false, 00:23:40.881 "serial_number": "SPDK00000000000001", 00:23:40.881 "model_number": "SPDK bdev Controller", 00:23:40.881 "max_namespaces": 10, 00:23:40.881 "min_cntlid": 1, 00:23:40.881 "max_cntlid": 65519, 00:23:40.881 "ana_reporting": false 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_subsystem_add_host", 00:23:40.881 "params": { 00:23:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.881 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.881 "psk": "/tmp/tmp.o2BoRyDPUu" 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_subsystem_add_ns", 00:23:40.881 "params": { 00:23:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.881 "namespace": { 00:23:40.881 "nsid": 1, 00:23:40.881 "bdev_name": "malloc0", 00:23:40.881 "nguid": "E7133111C9B549F4899E3C9CEA315295", 00:23:40.881 "uuid": "e7133111-c9b5-49f4-899e-3c9cea315295", 00:23:40.881 "no_auto_visible": false 00:23:40.881 } 00:23:40.881 } 00:23:40.881 }, 00:23:40.881 { 00:23:40.881 "method": "nvmf_subsystem_add_listener", 00:23:40.881 "params": { 00:23:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.881 "listen_address": { 00:23:40.881 "trtype": "TCP", 00:23:40.881 "adrfam": "IPv4", 00:23:40.881 "traddr": "10.0.0.2", 00:23:40.881 "trsvcid": "4420" 00:23:40.881 }, 00:23:40.881 "secure_channel": true 00:23:40.881 } 00:23:40.881 } 00:23:40.881 ] 00:23:40.881 } 00:23:40.881 ] 00:23:40.881 }' 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.881 
03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2453152 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2453152 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2453152 ']' 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.881 03:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.881 [2024-07-21 03:33:26.118472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:40.881 [2024-07-21 03:33:26.118551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.881 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.881 [2024-07-21 03:33:26.182102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.139 [2024-07-21 03:33:26.265741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.139 [2024-07-21 03:33:26.265793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.139 [2024-07-21 03:33:26.265816] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.139 [2024-07-21 03:33:26.265827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.139 [2024-07-21 03:33:26.265838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
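A note on the -c /dev/fd/62 argument in the nvmf_tgt invocation above: tls.sh never writes the target configuration to disk; the JSON echoed just before is handed to the target on an anonymous file descriptor. A minimal sketch of the same pattern, assuming bash (whose process substitution is what produces a /dev/fd/<n> path) and a TGTCONF variable holding that JSON; the real run additionally wraps the binary in ip netns exec cvl_0_0_ns_spdk:

# bash exposes <(...) as /dev/fd/<n>, which is what appears as -c /dev/fd/62.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$TGTCONF")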
00:23:41.139 [2024-07-21 03:33:26.265929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.396 [2024-07-21 03:33:26.491427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.396 [2024-07-21 03:33:26.507386] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:41.396 [2024-07-21 03:33:26.523439] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.396 [2024-07-21 03:33:26.534766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2453304 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2453304 /var/tmp/bdevperf.sock 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2453304 ']' 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:41.961 03:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:41.961 "subsystems": [ 00:23:41.961 { 00:23:41.961 "subsystem": "keyring", 00:23:41.961 "config": [] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "subsystem": "iobuf", 00:23:41.961 "config": [ 00:23:41.961 { 00:23:41.961 "method": "iobuf_set_options", 00:23:41.961 "params": { 00:23:41.961 "small_pool_count": 8192, 00:23:41.961 "large_pool_count": 1024, 00:23:41.961 "small_bufsize": 8192, 00:23:41.961 "large_bufsize": 135168 00:23:41.961 } 00:23:41.961 } 00:23:41.961 ] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "subsystem": "sock", 00:23:41.961 "config": [ 00:23:41.961 { 00:23:41.961 "method": "sock_set_default_impl", 00:23:41.961 "params": { 00:23:41.961 "impl_name": "posix" 00:23:41.961 } 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "method": "sock_impl_set_options", 00:23:41.961 "params": { 00:23:41.961 "impl_name": "ssl", 00:23:41.961 "recv_buf_size": 4096, 00:23:41.961 "send_buf_size": 4096, 00:23:41.961 "enable_recv_pipe": true, 00:23:41.961 "enable_quickack": false, 00:23:41.961 "enable_placement_id": 0, 00:23:41.961 "enable_zerocopy_send_server": true, 00:23:41.961 "enable_zerocopy_send_client": false, 00:23:41.961 "zerocopy_threshold": 0, 00:23:41.961 "tls_version": 0, 00:23:41.961 "enable_ktls": false 00:23:41.961 } 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "method": "sock_impl_set_options", 00:23:41.961 "params": { 00:23:41.961 "impl_name": "posix", 00:23:41.961 "recv_buf_size": 2097152, 00:23:41.961 "send_buf_size": 2097152, 00:23:41.961 "enable_recv_pipe": true, 00:23:41.961 
"enable_quickack": false, 00:23:41.961 "enable_placement_id": 0, 00:23:41.961 "enable_zerocopy_send_server": true, 00:23:41.961 "enable_zerocopy_send_client": false, 00:23:41.961 "zerocopy_threshold": 0, 00:23:41.961 "tls_version": 0, 00:23:41.961 "enable_ktls": false 00:23:41.961 } 00:23:41.961 } 00:23:41.961 ] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "subsystem": "vmd", 00:23:41.961 "config": [] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "subsystem": "accel", 00:23:41.961 "config": [ 00:23:41.961 { 00:23:41.961 "method": "accel_set_options", 00:23:41.961 "params": { 00:23:41.961 "small_cache_size": 128, 00:23:41.961 "large_cache_size": 16, 00:23:41.961 "task_count": 2048, 00:23:41.961 "sequence_count": 2048, 00:23:41.961 "buf_count": 2048 00:23:41.961 } 00:23:41.961 } 00:23:41.961 ] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "subsystem": "bdev", 00:23:41.961 "config": [ 00:23:41.961 { 00:23:41.961 "method": "bdev_set_options", 00:23:41.961 "params": { 00:23:41.961 "bdev_io_pool_size": 65535, 00:23:41.961 "bdev_io_cache_size": 256, 00:23:41.961 "bdev_auto_examine": true, 00:23:41.961 "iobuf_small_cache_size": 128, 00:23:41.961 "iobuf_large_cache_size": 16 00:23:41.961 } 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "method": "bdev_raid_set_options", 00:23:41.961 "params": { 00:23:41.961 "process_window_size_kb": 1024 00:23:41.961 } 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "method": "bdev_iscsi_set_options", 00:23:41.961 "params": { 00:23:41.961 "timeout_sec": 30 00:23:41.961 } 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "method": "bdev_nvme_set_options", 00:23:41.961 "params": { 00:23:41.961 "action_on_timeout": "none", 00:23:41.961 "timeout_us": 0, 00:23:41.961 "timeout_admin_us": 0, 00:23:41.961 "keep_alive_timeout_ms": 10000, 00:23:41.962 "arbitration_burst": 0, 00:23:41.962 "low_priority_weight": 0, 00:23:41.962 "medium_priority_weight": 0, 00:23:41.962 "high_priority_weight": 0, 00:23:41.962 "nvme_adminq_poll_period_us": 10000, 00:23:41.962 "nvme_ioq_poll_period_us": 0, 00:23:41.962 "io_queue_requests": 512, 00:23:41.962 "delay_cmd_submit": true, 00:23:41.962 "transport_retry_count": 4, 00:23:41.962 "bdev_retry_count": 3, 00:23:41.962 "transport_ack_timeout": 0, 00:23:41.962 "ctrlr_loss_timeout_sec": 0, 00:23:41.962 "reconnect_delay_sec": 0, 00:23:41.962 "fast_io_fail_timeout_sec": 0, 00:23:41.962 "disable_auto_failback": false, 00:23:41.962 "generate_uuids": false, 00:23:41.962 "transport_tos": 0, 00:23:41.962 "nvme_error_stat": false, 00:23:41.962 "rdma_srq_size": 0, 00:23:41.962 "io_path_stat": false, 00:23:41.962 "allow_accel_sequence": false, 00:23:41.962 "rdma_max_cq_size": 0, 00:23:41.962 "rdma_cm_event_timeout_ms": 0, 00:23:41.962 "dhchap_digests": [ 00:23:41.962 "sha256", 00:23:41.962 "sha384", 00:23:41.962 "sha512" 00:23:41.962 ], 00:23:41.962 "dhchap_dhgroups": [ 00:23:41.962 "null", 00:23:41.962 "ffdhe2048", 00:23:41.962 "ffdhe3072", 00:23:41.962 "ffdhe4096", 00:23:41.962 "ffdhe6144", 00:23:41.962 "ffdhe8192" 00:23:41.962 ] 00:23:41.962 } 00:23:41.962 }, 00:23:41.962 { 00:23:41.962 "method": "bdev_nvme_attach_controller", 00:23:41.962 "params": { 00:23:41.962 "name": "TLSTEST", 00:23:41.962 "trtype": "TCP", 00:23:41.962 "adrfam": "IPv4", 00:23:41.962 "traddr": "10.0.0.2", 00:23:41.962 "trsvcid": "4420", 00:23:41.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.962 "prchk_reftag": false, 00:23:41.962 "prchk_guard": false, 00:23:41.962 "ctrlr_loss_timeout_sec": 0, 00:23:41.962 "reconnect_delay_sec": 0, 00:23:41.962 "fast_io_fail_timeout_sec": 0, 00:23:41.962 
"psk": "/tmp/tmp.o2BoRyDPUu", 00:23:41.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.962 "hdgst": false, 00:23:41.962 "ddgst": false 00:23:41.962 } 00:23:41.962 }, 00:23:41.962 { 00:23:41.962 "method": "bdev_nvme_set_hotplug", 00:23:41.962 "params": { 00:23:41.962 "period_us": 100000, 00:23:41.962 "enable": false 00:23:41.962 } 00:23:41.962 }, 00:23:41.962 { 00:23:41.962 "method": "bdev_wait_for_examine" 00:23:41.962 } 00:23:41.962 ] 00:23:41.962 }, 00:23:41.962 { 00:23:41.962 "subsystem": "nbd", 00:23:41.962 "config": [] 00:23:41.962 } 00:23:41.962 ] 00:23:41.962 }' 00:23:41.962 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.962 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.962 03:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.962 [2024-07-21 03:33:27.163859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:41.962 [2024-07-21 03:33:27.163952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453304 ] 00:23:41.962 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.962 [2024-07-21 03:33:27.221822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.219 [2024-07-21 03:33:27.306145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.219 [2024-07-21 03:33:27.474025] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.219 [2024-07-21 03:33:27.474146] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:42.784 03:33:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:42.784 03:33:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:42.784 03:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:43.042 Running I/O for 10 seconds... 
00:23:53.016 00:23:53.016 Latency(us) 00:23:53.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.016 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:53.016 Verification LBA range: start 0x0 length 0x2000 00:23:53.016 TLSTESTn1 : 10.02 3402.54 13.29 0.00 0.00 37556.50 7136.14 42719.76 00:23:53.016 =================================================================================================================== 00:23:53.016 Total : 3402.54 13.29 0.00 0.00 37556.50 7136.14 42719.76 00:23:53.016 0 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2453304 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2453304 ']' 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2453304 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2453304 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2453304' 00:23:53.016 killing process with pid 2453304 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2453304 00:23:53.016 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.016 00:23:53.016 Latency(us) 00:23:53.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.016 =================================================================================================================== 00:23:53.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.016 [2024-07-21 03:33:38.262412] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:53.016 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2453304 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2453152 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2453152 ']' 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2453152 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2453152 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2453152' 00:23:53.274 killing process with pid 2453152 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2453152 00:23:53.274 [2024-07-21 03:33:38.518778] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:53.274 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2453152 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2454633 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2454633 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2454633 ']' 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:53.533 03:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.533 [2024-07-21 03:33:38.817475] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:53.533 [2024-07-21 03:33:38.817570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.793 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.793 [2024-07-21 03:33:38.881354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.793 [2024-07-21 03:33:38.965850] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.793 [2024-07-21 03:33:38.965929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.793 [2024-07-21 03:33:38.965943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.793 [2024-07-21 03:33:38.965954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.793 [2024-07-21 03:33:38.965963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
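The repeated killprocess calls in this trace all expand to the same guard sequence from autotest_common.sh: check the pid, confirm it is still alive and is an SPDK reactor rather than a sudo wrapper, kill it, and wait to reap it. Roughly, as a sketch reconstructed from the xtrace lines rather than the verbatim helper:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1             # no pid given, nothing to do
    kill -0 "$pid" || return              # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2
        [ "$name" = sudo ] && return 1            # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it; shutdown output lands here
}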
00:23:53.793 [2024-07-21 03:33:38.966005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.o2BoRyDPUu 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o2BoRyDPUu 00:23:53.793 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.050 [2024-07-21 03:33:39.363257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.307 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.307 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.872 [2024-07-21 03:33:39.884626] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.872 [2024-07-21 03:33:39.884873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.872 03:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.129 malloc0 00:23:55.129 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.387 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu 00:23:55.645 [2024-07-21 03:33:40.707311] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2454912 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2454912 /var/tmp/bdevperf.sock 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2454912 ']' 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.645 03:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.645 [2024-07-21 03:33:40.770466] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:55.645 [2024-07-21 03:33:40.770549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454912 ] 00:23:55.645 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.645 [2024-07-21 03:33:40.831547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.645 [2024-07-21 03:33:40.916961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.902 03:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.902 03:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.902 03:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o2BoRyDPUu 00:23:56.160 03:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.417 [2024-07-21 03:33:41.585694] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.417 nvme0n1 00:23:56.417 03:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.673 Running I/O for 1 seconds... 
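Unlike the earlier passes, which embedded everything in the JSON handed to -c, this pass builds the target over RPC and wires the PSK into the initiator through the keyring. The command sequence, exactly as traced above (/tmp/tmp.o2BoRyDPUu is the PSK file tls.sh generated earlier in the test):

# Target side (default /var/tmp/spdk.sock):
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o2BoRyDPUu

# Initiator side (bdevperf socket): register the PSK as a named key,
# then reference it by name when attaching the controller.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o2BoRyDPUu
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1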
00:23:57.604 00:23:57.604 Latency(us) 00:23:57.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.604 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.604 Verification LBA range: start 0x0 length 0x2000 00:23:57.604 nvme0n1 : 1.02 3402.55 13.29 0.00 0.00 37284.72 7573.05 35535.08 00:23:57.604 =================================================================================================================== 00:23:57.604 Total : 3402.55 13.29 0.00 0.00 37284.72 7573.05 35535.08 00:23:57.604 0 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2454912 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2454912 ']' 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2454912 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2454912 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2454912' 00:23:57.604 killing process with pid 2454912 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2454912 00:23:57.604 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.604 00:23:57.604 Latency(us) 00:23:57.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.604 =================================================================================================================== 00:23:57.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.604 03:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2454912 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2454633 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2454633 ']' 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2454633 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2454633 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2454633' 00:23:57.861 killing process with pid 2454633 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2454633 00:23:57.861 [2024-07-21 03:33:43.068745] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:57.861 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2454633 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.118 
03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2455195 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2455195 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2455195 ']' 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:58.118 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.118 [2024-07-21 03:33:43.364162] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:58.118 [2024-07-21 03:33:43.364228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.118 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.118 [2024-07-21 03:33:43.429517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.375 [2024-07-21 03:33:43.519185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.375 [2024-07-21 03:33:43.519251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.375 [2024-07-21 03:33:43.519267] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.375 [2024-07-21 03:33:43.519290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.375 [2024-07-21 03:33:43.519303] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
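Each nvmf_tgt start in this log runs with -e 0xFFFF, so all tracepoint groups are enabled, and the startup notices spell out how to inspect them. In short, using the commands the application itself prints (the /tmp destination below is just an example):

# Snapshot the live tracepoints of app instance 0:
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis:
cp /dev/shm/nvmf_trace.0 /tmp/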
00:23:58.375 [2024-07-21 03:33:43.519332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.375 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.375 [2024-07-21 03:33:43.665320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.375 malloc0 00:23:58.633 [2024-07-21 03:33:43.698224] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.633 [2024-07-21 03:33:43.698517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2455329 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2455329 /var/tmp/bdevperf.sock 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2455329 ']' 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:58.633 03:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.633 [2024-07-21 03:33:43.768183] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:58.633 [2024-07-21 03:33:43.768244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455329 ] 00:23:58.633 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.633 [2024-07-21 03:33:43.830363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.633 [2024-07-21 03:33:43.922704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.890 03:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:58.890 03:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:58.890 03:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o2BoRyDPUu 00:23:59.147 03:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:59.404 [2024-07-21 03:33:44.490259] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.404 nvme0n1 00:23:59.404 03:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.404 Running I/O for 1 seconds... 00:24:00.773 00:24:00.773 Latency(us) 00:24:00.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.773 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:00.773 Verification LBA range: start 0x0 length 0x2000 00:24:00.773 nvme0n1 : 1.02 3307.52 12.92 0.00 0.00 38256.14 6189.51 30292.20 00:24:00.773 =================================================================================================================== 00:24:00.773 Total : 3307.52 12.92 0.00 0.00 38256.14 6189.51 30292.20 00:24:00.773 0 00:24:00.773 03:33:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:00.773 03:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.773 03:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.773 03:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.773 03:33:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:00.773 "subsystems": [ 00:24:00.773 { 00:24:00.773 "subsystem": "keyring", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "keyring_file_add_key", 00:24:00.773 "params": { 00:24:00.773 "name": "key0", 00:24:00.773 "path": "/tmp/tmp.o2BoRyDPUu" 00:24:00.773 } 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "iobuf", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "iobuf_set_options", 00:24:00.773 "params": { 00:24:00.773 "small_pool_count": 8192, 00:24:00.773 "large_pool_count": 1024, 00:24:00.773 "small_bufsize": 8192, 00:24:00.773 "large_bufsize": 135168 00:24:00.773 } 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "sock", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "sock_set_default_impl", 00:24:00.773 "params": { 00:24:00.773 "impl_name": "posix" 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 
{ 00:24:00.773 "method": "sock_impl_set_options", 00:24:00.773 "params": { 00:24:00.773 "impl_name": "ssl", 00:24:00.773 "recv_buf_size": 4096, 00:24:00.773 "send_buf_size": 4096, 00:24:00.773 "enable_recv_pipe": true, 00:24:00.773 "enable_quickack": false, 00:24:00.773 "enable_placement_id": 0, 00:24:00.773 "enable_zerocopy_send_server": true, 00:24:00.773 "enable_zerocopy_send_client": false, 00:24:00.773 "zerocopy_threshold": 0, 00:24:00.773 "tls_version": 0, 00:24:00.773 "enable_ktls": false 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "sock_impl_set_options", 00:24:00.773 "params": { 00:24:00.773 "impl_name": "posix", 00:24:00.773 "recv_buf_size": 2097152, 00:24:00.773 "send_buf_size": 2097152, 00:24:00.773 "enable_recv_pipe": true, 00:24:00.773 "enable_quickack": false, 00:24:00.773 "enable_placement_id": 0, 00:24:00.773 "enable_zerocopy_send_server": true, 00:24:00.773 "enable_zerocopy_send_client": false, 00:24:00.773 "zerocopy_threshold": 0, 00:24:00.773 "tls_version": 0, 00:24:00.773 "enable_ktls": false 00:24:00.773 } 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "vmd", 00:24:00.773 "config": [] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "accel", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "accel_set_options", 00:24:00.773 "params": { 00:24:00.773 "small_cache_size": 128, 00:24:00.773 "large_cache_size": 16, 00:24:00.773 "task_count": 2048, 00:24:00.773 "sequence_count": 2048, 00:24:00.773 "buf_count": 2048 00:24:00.773 } 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "bdev", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "bdev_set_options", 00:24:00.773 "params": { 00:24:00.773 "bdev_io_pool_size": 65535, 00:24:00.773 "bdev_io_cache_size": 256, 00:24:00.773 "bdev_auto_examine": true, 00:24:00.773 "iobuf_small_cache_size": 128, 00:24:00.773 "iobuf_large_cache_size": 16 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_raid_set_options", 00:24:00.773 "params": { 00:24:00.773 "process_window_size_kb": 1024 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_iscsi_set_options", 00:24:00.773 "params": { 00:24:00.773 "timeout_sec": 30 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_nvme_set_options", 00:24:00.773 "params": { 00:24:00.773 "action_on_timeout": "none", 00:24:00.773 "timeout_us": 0, 00:24:00.773 "timeout_admin_us": 0, 00:24:00.773 "keep_alive_timeout_ms": 10000, 00:24:00.773 "arbitration_burst": 0, 00:24:00.773 "low_priority_weight": 0, 00:24:00.773 "medium_priority_weight": 0, 00:24:00.773 "high_priority_weight": 0, 00:24:00.773 "nvme_adminq_poll_period_us": 10000, 00:24:00.773 "nvme_ioq_poll_period_us": 0, 00:24:00.773 "io_queue_requests": 0, 00:24:00.773 "delay_cmd_submit": true, 00:24:00.773 "transport_retry_count": 4, 00:24:00.773 "bdev_retry_count": 3, 00:24:00.773 "transport_ack_timeout": 0, 00:24:00.773 "ctrlr_loss_timeout_sec": 0, 00:24:00.773 "reconnect_delay_sec": 0, 00:24:00.773 "fast_io_fail_timeout_sec": 0, 00:24:00.773 "disable_auto_failback": false, 00:24:00.773 "generate_uuids": false, 00:24:00.773 "transport_tos": 0, 00:24:00.773 "nvme_error_stat": false, 00:24:00.773 "rdma_srq_size": 0, 00:24:00.773 "io_path_stat": false, 00:24:00.773 "allow_accel_sequence": false, 00:24:00.773 "rdma_max_cq_size": 0, 00:24:00.773 "rdma_cm_event_timeout_ms": 0, 00:24:00.773 "dhchap_digests": [ 00:24:00.773 "sha256", 00:24:00.773 "sha384", 
00:24:00.773 "sha512" 00:24:00.773 ], 00:24:00.773 "dhchap_dhgroups": [ 00:24:00.773 "null", 00:24:00.773 "ffdhe2048", 00:24:00.773 "ffdhe3072", 00:24:00.773 "ffdhe4096", 00:24:00.773 "ffdhe6144", 00:24:00.773 "ffdhe8192" 00:24:00.773 ] 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_nvme_set_hotplug", 00:24:00.773 "params": { 00:24:00.773 "period_us": 100000, 00:24:00.773 "enable": false 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_malloc_create", 00:24:00.773 "params": { 00:24:00.773 "name": "malloc0", 00:24:00.773 "num_blocks": 8192, 00:24:00.773 "block_size": 4096, 00:24:00.773 "physical_block_size": 4096, 00:24:00.773 "uuid": "41fd5406-aca4-4b5c-8692-88c8cfa3de50", 00:24:00.773 "optimal_io_boundary": 0 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "bdev_wait_for_examine" 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "nbd", 00:24:00.773 "config": [] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "scheduler", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "framework_set_scheduler", 00:24:00.773 "params": { 00:24:00.773 "name": "static" 00:24:00.773 } 00:24:00.773 } 00:24:00.773 ] 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "subsystem": "nvmf", 00:24:00.773 "config": [ 00:24:00.773 { 00:24:00.773 "method": "nvmf_set_config", 00:24:00.773 "params": { 00:24:00.773 "discovery_filter": "match_any", 00:24:00.773 "admin_cmd_passthru": { 00:24:00.773 "identify_ctrlr": false 00:24:00.773 } 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "nvmf_set_max_subsystems", 00:24:00.773 "params": { 00:24:00.773 "max_subsystems": 1024 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "nvmf_set_crdt", 00:24:00.773 "params": { 00:24:00.773 "crdt1": 0, 00:24:00.773 "crdt2": 0, 00:24:00.773 "crdt3": 0 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "nvmf_create_transport", 00:24:00.773 "params": { 00:24:00.773 "trtype": "TCP", 00:24:00.773 "max_queue_depth": 128, 00:24:00.773 "max_io_qpairs_per_ctrlr": 127, 00:24:00.773 "in_capsule_data_size": 4096, 00:24:00.773 "max_io_size": 131072, 00:24:00.773 "io_unit_size": 131072, 00:24:00.773 "max_aq_depth": 128, 00:24:00.773 "num_shared_buffers": 511, 00:24:00.773 "buf_cache_size": 4294967295, 00:24:00.773 "dif_insert_or_strip": false, 00:24:00.773 "zcopy": false, 00:24:00.773 "c2h_success": false, 00:24:00.773 "sock_priority": 0, 00:24:00.773 "abort_timeout_sec": 1, 00:24:00.773 "ack_timeout": 0, 00:24:00.773 "data_wr_pool_size": 0 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "nvmf_create_subsystem", 00:24:00.773 "params": { 00:24:00.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.773 "allow_any_host": false, 00:24:00.773 "serial_number": "00000000000000000000", 00:24:00.773 "model_number": "SPDK bdev Controller", 00:24:00.773 "max_namespaces": 32, 00:24:00.773 "min_cntlid": 1, 00:24:00.773 "max_cntlid": 65519, 00:24:00.773 "ana_reporting": false 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.773 "method": "nvmf_subsystem_add_host", 00:24:00.773 "params": { 00:24:00.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.773 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.773 "psk": "key0" 00:24:00.773 } 00:24:00.773 }, 00:24:00.773 { 00:24:00.774 "method": "nvmf_subsystem_add_ns", 00:24:00.774 "params": { 00:24:00.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.774 "namespace": { 00:24:00.774 "nsid": 1, 00:24:00.774 "bdev_name": 
"malloc0", 00:24:00.774 "nguid": "41FD5406ACA44B5C869288C8CFA3DE50", 00:24:00.774 "uuid": "41fd5406-aca4-4b5c-8692-88c8cfa3de50", 00:24:00.774 "no_auto_visible": false 00:24:00.774 } 00:24:00.774 } 00:24:00.774 }, 00:24:00.774 { 00:24:00.774 "method": "nvmf_subsystem_add_listener", 00:24:00.774 "params": { 00:24:00.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.774 "listen_address": { 00:24:00.774 "trtype": "TCP", 00:24:00.774 "adrfam": "IPv4", 00:24:00.774 "traddr": "10.0.0.2", 00:24:00.774 "trsvcid": "4420" 00:24:00.774 }, 00:24:00.774 "secure_channel": true 00:24:00.774 } 00:24:00.774 } 00:24:00.774 ] 00:24:00.774 } 00:24:00.774 ] 00:24:00.774 }' 00:24:00.774 03:33:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:01.032 03:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:01.032 "subsystems": [ 00:24:01.032 { 00:24:01.032 "subsystem": "keyring", 00:24:01.032 "config": [ 00:24:01.032 { 00:24:01.032 "method": "keyring_file_add_key", 00:24:01.032 "params": { 00:24:01.032 "name": "key0", 00:24:01.032 "path": "/tmp/tmp.o2BoRyDPUu" 00:24:01.032 } 00:24:01.032 } 00:24:01.032 ] 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "subsystem": "iobuf", 00:24:01.032 "config": [ 00:24:01.032 { 00:24:01.032 "method": "iobuf_set_options", 00:24:01.032 "params": { 00:24:01.032 "small_pool_count": 8192, 00:24:01.032 "large_pool_count": 1024, 00:24:01.032 "small_bufsize": 8192, 00:24:01.032 "large_bufsize": 135168 00:24:01.032 } 00:24:01.032 } 00:24:01.032 ] 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "subsystem": "sock", 00:24:01.032 "config": [ 00:24:01.032 { 00:24:01.032 "method": "sock_set_default_impl", 00:24:01.032 "params": { 00:24:01.032 "impl_name": "posix" 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "sock_impl_set_options", 00:24:01.032 "params": { 00:24:01.032 "impl_name": "ssl", 00:24:01.032 "recv_buf_size": 4096, 00:24:01.032 "send_buf_size": 4096, 00:24:01.032 "enable_recv_pipe": true, 00:24:01.032 "enable_quickack": false, 00:24:01.032 "enable_placement_id": 0, 00:24:01.032 "enable_zerocopy_send_server": true, 00:24:01.032 "enable_zerocopy_send_client": false, 00:24:01.032 "zerocopy_threshold": 0, 00:24:01.032 "tls_version": 0, 00:24:01.032 "enable_ktls": false 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "sock_impl_set_options", 00:24:01.032 "params": { 00:24:01.032 "impl_name": "posix", 00:24:01.032 "recv_buf_size": 2097152, 00:24:01.032 "send_buf_size": 2097152, 00:24:01.032 "enable_recv_pipe": true, 00:24:01.032 "enable_quickack": false, 00:24:01.032 "enable_placement_id": 0, 00:24:01.032 "enable_zerocopy_send_server": true, 00:24:01.032 "enable_zerocopy_send_client": false, 00:24:01.032 "zerocopy_threshold": 0, 00:24:01.032 "tls_version": 0, 00:24:01.032 "enable_ktls": false 00:24:01.032 } 00:24:01.032 } 00:24:01.032 ] 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "subsystem": "vmd", 00:24:01.032 "config": [] 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "subsystem": "accel", 00:24:01.032 "config": [ 00:24:01.032 { 00:24:01.032 "method": "accel_set_options", 00:24:01.032 "params": { 00:24:01.032 "small_cache_size": 128, 00:24:01.032 "large_cache_size": 16, 00:24:01.032 "task_count": 2048, 00:24:01.032 "sequence_count": 2048, 00:24:01.032 "buf_count": 2048 00:24:01.032 } 00:24:01.032 } 00:24:01.032 ] 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "subsystem": "bdev", 00:24:01.032 "config": [ 00:24:01.032 { 00:24:01.032 
"method": "bdev_set_options", 00:24:01.032 "params": { 00:24:01.032 "bdev_io_pool_size": 65535, 00:24:01.032 "bdev_io_cache_size": 256, 00:24:01.032 "bdev_auto_examine": true, 00:24:01.032 "iobuf_small_cache_size": 128, 00:24:01.032 "iobuf_large_cache_size": 16 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_raid_set_options", 00:24:01.032 "params": { 00:24:01.032 "process_window_size_kb": 1024 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_iscsi_set_options", 00:24:01.032 "params": { 00:24:01.032 "timeout_sec": 30 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_nvme_set_options", 00:24:01.032 "params": { 00:24:01.032 "action_on_timeout": "none", 00:24:01.032 "timeout_us": 0, 00:24:01.032 "timeout_admin_us": 0, 00:24:01.032 "keep_alive_timeout_ms": 10000, 00:24:01.032 "arbitration_burst": 0, 00:24:01.032 "low_priority_weight": 0, 00:24:01.032 "medium_priority_weight": 0, 00:24:01.032 "high_priority_weight": 0, 00:24:01.032 "nvme_adminq_poll_period_us": 10000, 00:24:01.032 "nvme_ioq_poll_period_us": 0, 00:24:01.032 "io_queue_requests": 512, 00:24:01.032 "delay_cmd_submit": true, 00:24:01.032 "transport_retry_count": 4, 00:24:01.032 "bdev_retry_count": 3, 00:24:01.032 "transport_ack_timeout": 0, 00:24:01.032 "ctrlr_loss_timeout_sec": 0, 00:24:01.032 "reconnect_delay_sec": 0, 00:24:01.032 "fast_io_fail_timeout_sec": 0, 00:24:01.032 "disable_auto_failback": false, 00:24:01.032 "generate_uuids": false, 00:24:01.032 "transport_tos": 0, 00:24:01.032 "nvme_error_stat": false, 00:24:01.032 "rdma_srq_size": 0, 00:24:01.032 "io_path_stat": false, 00:24:01.032 "allow_accel_sequence": false, 00:24:01.032 "rdma_max_cq_size": 0, 00:24:01.032 "rdma_cm_event_timeout_ms": 0, 00:24:01.032 "dhchap_digests": [ 00:24:01.032 "sha256", 00:24:01.032 "sha384", 00:24:01.032 "sha512" 00:24:01.032 ], 00:24:01.032 "dhchap_dhgroups": [ 00:24:01.032 "null", 00:24:01.032 "ffdhe2048", 00:24:01.032 "ffdhe3072", 00:24:01.032 "ffdhe4096", 00:24:01.032 "ffdhe6144", 00:24:01.032 "ffdhe8192" 00:24:01.032 ] 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_nvme_attach_controller", 00:24:01.032 "params": { 00:24:01.032 "name": "nvme0", 00:24:01.032 "trtype": "TCP", 00:24:01.032 "adrfam": "IPv4", 00:24:01.032 "traddr": "10.0.0.2", 00:24:01.032 "trsvcid": "4420", 00:24:01.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.032 "prchk_reftag": false, 00:24:01.032 "prchk_guard": false, 00:24:01.032 "ctrlr_loss_timeout_sec": 0, 00:24:01.032 "reconnect_delay_sec": 0, 00:24:01.032 "fast_io_fail_timeout_sec": 0, 00:24:01.032 "psk": "key0", 00:24:01.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.032 "hdgst": false, 00:24:01.032 "ddgst": false 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_nvme_set_hotplug", 00:24:01.032 "params": { 00:24:01.032 "period_us": 100000, 00:24:01.032 "enable": false 00:24:01.032 } 00:24:01.032 }, 00:24:01.032 { 00:24:01.032 "method": "bdev_enable_histogram", 00:24:01.032 "params": { 00:24:01.032 "name": "nvme0n1", 00:24:01.032 "enable": true 00:24:01.032 } 00:24:01.032 }, 00:24:01.033 { 00:24:01.033 "method": "bdev_wait_for_examine" 00:24:01.033 } 00:24:01.033 ] 00:24:01.033 }, 00:24:01.033 { 00:24:01.033 "subsystem": "nbd", 00:24:01.033 "config": [] 00:24:01.033 } 00:24:01.033 ] 00:24:01.033 }' 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2455329 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2455329 
']' 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2455329 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2455329 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2455329' 00:24:01.033 killing process with pid 2455329 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2455329 00:24:01.033 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.033 00:24:01.033 Latency(us) 00:24:01.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.033 =================================================================================================================== 00:24:01.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.033 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2455329 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2455195 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2455195 ']' 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2455195 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2455195 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2455195' 00:24:01.290 killing process with pid 2455195 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2455195 00:24:01.290 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2455195 00:24:01.548 03:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:01.548 03:33:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.548 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:01.548 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.548 03:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:01.548 "subsystems": [ 00:24:01.548 { 00:24:01.548 "subsystem": "keyring", 00:24:01.548 "config": [ 00:24:01.548 { 00:24:01.548 "method": "keyring_file_add_key", 00:24:01.548 "params": { 00:24:01.548 "name": "key0", 00:24:01.548 "path": "/tmp/tmp.o2BoRyDPUu" 00:24:01.548 } 00:24:01.548 } 00:24:01.548 ] 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "subsystem": "iobuf", 00:24:01.548 "config": [ 00:24:01.548 { 00:24:01.548 "method": "iobuf_set_options", 00:24:01.548 "params": { 00:24:01.548 "small_pool_count": 8192, 00:24:01.548 "large_pool_count": 1024, 00:24:01.548 "small_bufsize": 8192, 00:24:01.548 "large_bufsize": 135168 00:24:01.548 } 00:24:01.548 } 
00:24:01.548 ] 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "subsystem": "sock", 00:24:01.548 "config": [ 00:24:01.548 { 00:24:01.548 "method": "sock_set_default_impl", 00:24:01.548 "params": { 00:24:01.548 "impl_name": "posix" 00:24:01.548 } 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "method": "sock_impl_set_options", 00:24:01.548 "params": { 00:24:01.548 "impl_name": "ssl", 00:24:01.548 "recv_buf_size": 4096, 00:24:01.548 "send_buf_size": 4096, 00:24:01.548 "enable_recv_pipe": true, 00:24:01.548 "enable_quickack": false, 00:24:01.548 "enable_placement_id": 0, 00:24:01.548 "enable_zerocopy_send_server": true, 00:24:01.548 "enable_zerocopy_send_client": false, 00:24:01.548 "zerocopy_threshold": 0, 00:24:01.548 "tls_version": 0, 00:24:01.548 "enable_ktls": false 00:24:01.548 } 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "method": "sock_impl_set_options", 00:24:01.548 "params": { 00:24:01.548 "impl_name": "posix", 00:24:01.548 "recv_buf_size": 2097152, 00:24:01.548 "send_buf_size": 2097152, 00:24:01.548 "enable_recv_pipe": true, 00:24:01.548 "enable_quickack": false, 00:24:01.548 "enable_placement_id": 0, 00:24:01.548 "enable_zerocopy_send_server": true, 00:24:01.548 "enable_zerocopy_send_client": false, 00:24:01.548 "zerocopy_threshold": 0, 00:24:01.548 "tls_version": 0, 00:24:01.548 "enable_ktls": false 00:24:01.548 } 00:24:01.548 } 00:24:01.548 ] 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "subsystem": "vmd", 00:24:01.548 "config": [] 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "subsystem": "accel", 00:24:01.548 "config": [ 00:24:01.548 { 00:24:01.548 "method": "accel_set_options", 00:24:01.548 "params": { 00:24:01.548 "small_cache_size": 128, 00:24:01.548 "large_cache_size": 16, 00:24:01.548 "task_count": 2048, 00:24:01.548 "sequence_count": 2048, 00:24:01.548 "buf_count": 2048 00:24:01.548 } 00:24:01.548 } 00:24:01.548 ] 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "subsystem": "bdev", 00:24:01.548 "config": [ 00:24:01.548 { 00:24:01.548 "method": "bdev_set_options", 00:24:01.548 "params": { 00:24:01.548 "bdev_io_pool_size": 65535, 00:24:01.548 "bdev_io_cache_size": 256, 00:24:01.548 "bdev_auto_examine": true, 00:24:01.548 "iobuf_small_cache_size": 128, 00:24:01.548 "iobuf_large_cache_size": 16 00:24:01.548 } 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "method": "bdev_raid_set_options", 00:24:01.548 "params": { 00:24:01.548 "process_window_size_kb": 1024 00:24:01.548 } 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "method": "bdev_iscsi_set_options", 00:24:01.548 "params": { 00:24:01.548 "timeout_sec": 30 00:24:01.548 } 00:24:01.548 }, 00:24:01.548 { 00:24:01.548 "method": "bdev_nvme_set_options", 00:24:01.548 "params": { 00:24:01.548 "action_on_timeout": "none", 00:24:01.548 "timeout_us": 0, 00:24:01.548 "timeout_admin_us": 0, 00:24:01.548 "keep_alive_timeout_ms": 10000, 00:24:01.549 "arbitration_burst": 0, 00:24:01.549 "low_priority_weight": 0, 00:24:01.549 "medium_priority_weight": 0, 00:24:01.549 "high_priority_weight": 0, 00:24:01.549 "nvme_adminq_poll_period_us": 10000, 00:24:01.549 "nvme_ioq_poll_period_us": 0, 00:24:01.549 "io_queue_requests": 0, 00:24:01.549 "delay_cmd_submit": true, 00:24:01.549 "transport_retry_count": 4, 00:24:01.549 "bdev_retry_count": 3, 00:24:01.549 "transport_ack_timeout": 0, 00:24:01.549 "ctrlr_loss_timeout_sec": 0, 00:24:01.549 "reconnect_delay_sec": 0, 00:24:01.549 "fast_io_fail_timeout_sec": 0, 00:24:01.549 "disable_auto_failback": false, 00:24:01.549 "generate_uuids": false, 00:24:01.549 "transport_tos": 0, 00:24:01.549 "nvme_error_stat": false, 
00:24:01.549 "rdma_srq_size": 0, 00:24:01.549 "io_path_stat": false, 00:24:01.549 "allow_accel_sequence": false, 00:24:01.549 "rdma_max_cq_size": 0, 00:24:01.549 "rdma_cm_event_timeout_ms": 0, 00:24:01.549 "dhchap_digests": [ 00:24:01.549 "sha256", 00:24:01.549 "sha384", 00:24:01.549 "sha512" 00:24:01.549 ], 00:24:01.549 "dhchap_dhgroups": [ 00:24:01.549 "null", 00:24:01.549 "ffdhe2048", 00:24:01.549 "ffdhe3072", 00:24:01.549 "ffdhe4096", 00:24:01.549 "ffdhe6144", 00:24:01.549 "ffdhe8192" 00:24:01.549 ] 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "bdev_nvme_set_hotplug", 00:24:01.549 "params": { 00:24:01.549 "period_us": 100000, 00:24:01.549 "enable": false 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "bdev_malloc_create", 00:24:01.549 "params": { 00:24:01.549 "name": "malloc0", 00:24:01.549 "num_blocks": 8192, 00:24:01.549 "block_size": 4096, 00:24:01.549 "physical_block_size": 4096, 00:24:01.549 "uuid": "41fd5406-aca4-4b5c-8692-88c8cfa3de50", 00:24:01.549 "optimal_io_boundary": 0 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "bdev_wait_for_examine" 00:24:01.549 } 00:24:01.549 ] 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "subsystem": "nbd", 00:24:01.549 "config": [] 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "subsystem": "scheduler", 00:24:01.549 "config": [ 00:24:01.549 { 00:24:01.549 "method": "framework_set_scheduler", 00:24:01.549 "params": { 00:24:01.549 "name": "static" 00:24:01.549 } 00:24:01.549 } 00:24:01.549 ] 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "subsystem": "nvmf", 00:24:01.549 "config": [ 00:24:01.549 { 00:24:01.549 "method": "nvmf_set_config", 00:24:01.549 "params": { 00:24:01.549 "discovery_filter": "match_any", 00:24:01.549 "admin_cmd_passthru": { 00:24:01.549 "identify_ctrlr": false 00:24:01.549 } 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_set_max_subsystems", 00:24:01.549 "params": { 00:24:01.549 "max_subsystems": 1024 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_set_crdt", 00:24:01.549 "params": { 00:24:01.549 "crdt1": 0, 00:24:01.549 "crdt2": 0, 00:24:01.549 "crdt3": 0 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_create_transport", 00:24:01.549 "params": { 00:24:01.549 "trtype": "TCP", 00:24:01.549 "max_queue_depth": 128, 00:24:01.549 "max_io_qpairs_per_ctrlr": 127, 00:24:01.549 "in_capsule_data_size": 4096, 00:24:01.549 "max_io_size": 131072, 00:24:01.549 "io_unit_size": 131072, 00:24:01.549 "max_aq_depth": 128, 00:24:01.549 "num_shared_buffers": 511, 00:24:01.549 "buf_cache_size": 4294967295, 00:24:01.549 "dif_insert_or_strip": false, 00:24:01.549 "zcopy": false, 00:24:01.549 "c2h_success": false, 00:24:01.549 "sock_priority": 0, 00:24:01.549 "abort_timeout_sec": 1, 00:24:01.549 "ack_timeout": 0, 00:24:01.549 "data_wr_pool_size": 0 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_create_subsystem", 00:24:01.549 "params": { 00:24:01.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.549 "allow_any_host": false, 00:24:01.549 "serial_number": "00000000000000000000", 00:24:01.549 "model_number": "SPDK bdev Controller", 00:24:01.549 "max_namespaces": 32, 00:24:01.549 "min_cntlid": 1, 00:24:01.549 "max_cntlid": 65519, 00:24:01.549 "ana_reporting": false 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_subsystem_add_host", 00:24:01.549 "params": { 00:24:01.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.549 "host": "nqn.2016-06.io.spdk:host1", 
00:24:01.549 "psk": "key0" 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_subsystem_add_ns", 00:24:01.549 "params": { 00:24:01.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.549 "namespace": { 00:24:01.549 "nsid": 1, 00:24:01.549 "bdev_name": "malloc0", 00:24:01.549 "nguid": "41FD5406ACA44B5C869288C8CFA3DE50", 00:24:01.549 "uuid": "41fd5406-aca4-4b5c-8692-88c8cfa3de50", 00:24:01.549 "no_auto_visible": false 00:24:01.549 } 00:24:01.549 } 00:24:01.549 }, 00:24:01.549 { 00:24:01.549 "method": "nvmf_subsystem_add_listener", 00:24:01.549 "params": { 00:24:01.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.549 "listen_address": { 00:24:01.549 "trtype": "TCP", 00:24:01.549 "adrfam": "IPv4", 00:24:01.549 "traddr": "10.0.0.2", 00:24:01.549 "trsvcid": "4420" 00:24:01.549 }, 00:24:01.549 "secure_channel": true 00:24:01.549 } 00:24:01.549 } 00:24:01.549 ] 00:24:01.549 } 00:24:01.549 ] 00:24:01.549 }' 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2455623 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2455623 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2455623 ']' 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.549 03:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.549 [2024-07-21 03:33:46.707676] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:01.549 [2024-07-21 03:33:46.707755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.549 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.549 [2024-07-21 03:33:46.785744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.807 [2024-07-21 03:33:46.888694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.807 [2024-07-21 03:33:46.888756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.807 [2024-07-21 03:33:46.888783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.807 [2024-07-21 03:33:46.888808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.807 [2024-07-21 03:33:46.888830] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.807 [2024-07-21 03:33:46.888971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:02.065 [2024-07-21 03:33:47.128656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:02.065 [2024-07-21 03:33:47.160670] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:24:02.065 [2024-07-21 03:33:47.173840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2455777
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2455777 /var/tmp/bdevperf.sock
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2455777 ']'
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
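bdevperf is started with -z, which keeps it idle and listening for RPCs on /var/tmp/bdevperf.sock instead of running a job immediately; the trace that follows configures it over that socket and only then triggers I/O. The three-step flow, sketched with the same paths and options this run uses:

# 1. Launch bdevperf idle (-z) with its own RPC socket and the I/O shape baked in.
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 &
# 2. Configure and inspect it through rpc.py pointed at that socket.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# 3. Kick off the run; the latency table further down is printed by this step.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests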
00:24:02.630 03:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:02.630 "subsystems": [ 00:24:02.630 { 00:24:02.630 "subsystem": "keyring", 00:24:02.630 "config": [ 00:24:02.630 { 00:24:02.630 "method": "keyring_file_add_key", 00:24:02.630 "params": { 00:24:02.630 "name": "key0", 00:24:02.630 "path": "/tmp/tmp.o2BoRyDPUu" 00:24:02.630 } 00:24:02.630 } 00:24:02.630 ] 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "subsystem": "iobuf", 00:24:02.630 "config": [ 00:24:02.630 { 00:24:02.630 "method": "iobuf_set_options", 00:24:02.630 "params": { 00:24:02.630 "small_pool_count": 8192, 00:24:02.630 "large_pool_count": 1024, 00:24:02.630 "small_bufsize": 8192, 00:24:02.630 "large_bufsize": 135168 00:24:02.630 } 00:24:02.630 } 00:24:02.630 ] 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "subsystem": "sock", 00:24:02.630 "config": [ 00:24:02.630 { 00:24:02.630 "method": "sock_set_default_impl", 00:24:02.630 "params": { 00:24:02.630 "impl_name": "posix" 00:24:02.630 } 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "method": "sock_impl_set_options", 00:24:02.630 "params": { 00:24:02.630 "impl_name": "ssl", 00:24:02.630 "recv_buf_size": 4096, 00:24:02.630 "send_buf_size": 4096, 00:24:02.630 "enable_recv_pipe": true, 00:24:02.630 "enable_quickack": false, 00:24:02.630 "enable_placement_id": 0, 00:24:02.630 "enable_zerocopy_send_server": true, 00:24:02.630 "enable_zerocopy_send_client": false, 00:24:02.630 "zerocopy_threshold": 0, 00:24:02.630 "tls_version": 0, 00:24:02.630 "enable_ktls": false 00:24:02.630 } 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "method": "sock_impl_set_options", 00:24:02.630 "params": { 00:24:02.630 "impl_name": "posix", 00:24:02.630 "recv_buf_size": 2097152, 00:24:02.630 "send_buf_size": 2097152, 00:24:02.630 "enable_recv_pipe": true, 00:24:02.630 "enable_quickack": false, 00:24:02.630 "enable_placement_id": 0, 00:24:02.630 "enable_zerocopy_send_server": true, 00:24:02.630 "enable_zerocopy_send_client": false, 00:24:02.630 "zerocopy_threshold": 0, 00:24:02.630 "tls_version": 0, 00:24:02.630 "enable_ktls": false 00:24:02.630 } 00:24:02.630 } 00:24:02.630 ] 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "subsystem": "vmd", 00:24:02.630 "config": [] 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "subsystem": "accel", 00:24:02.630 "config": [ 00:24:02.630 { 00:24:02.630 "method": "accel_set_options", 00:24:02.630 "params": { 00:24:02.630 "small_cache_size": 128, 00:24:02.630 "large_cache_size": 16, 00:24:02.630 "task_count": 2048, 00:24:02.630 "sequence_count": 2048, 00:24:02.630 "buf_count": 2048 00:24:02.630 } 00:24:02.630 } 00:24:02.630 ] 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "subsystem": "bdev", 00:24:02.630 "config": [ 00:24:02.630 { 00:24:02.630 "method": "bdev_set_options", 00:24:02.630 "params": { 00:24:02.630 "bdev_io_pool_size": 65535, 00:24:02.630 "bdev_io_cache_size": 256, 00:24:02.630 "bdev_auto_examine": true, 00:24:02.630 "iobuf_small_cache_size": 128, 00:24:02.630 "iobuf_large_cache_size": 16 00:24:02.630 } 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "method": "bdev_raid_set_options", 00:24:02.630 "params": { 00:24:02.630 "process_window_size_kb": 1024 00:24:02.630 } 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "method": "bdev_iscsi_set_options", 00:24:02.630 "params": { 00:24:02.630 "timeout_sec": 30 00:24:02.630 } 00:24:02.630 }, 00:24:02.630 { 00:24:02.630 "method": "bdev_nvme_set_options", 00:24:02.630 "params": { 00:24:02.630 "action_on_timeout": "none", 00:24:02.630 "timeout_us": 0, 00:24:02.630 "timeout_admin_us": 0, 00:24:02.630 "keep_alive_timeout_ms": 
10000, 00:24:02.631 "arbitration_burst": 0, 00:24:02.631 "low_priority_weight": 0, 00:24:02.631 "medium_priority_weight": 0, 00:24:02.631 "high_priority_weight": 0, 00:24:02.631 "nvme_adminq_poll_period_us": 10000, 00:24:02.631 "nvme_ioq_poll_period_us": 0, 00:24:02.631 "io_queue_requests": 512, 00:24:02.631 "delay_cmd_submit": true, 00:24:02.631 "transport_retry_count": 4, 00:24:02.631 "bdev_retry_count": 3, 00:24:02.631 "transport_ack_timeout": 0, 00:24:02.631 "ctrlr_loss_timeout_sec": 0, 00:24:02.631 "reconnect_delay_sec": 0, 00:24:02.631 "fast_io_fail_timeout_sec": 0, 00:24:02.631 "disable_auto_failback": false, 00:24:02.631 "generate_uuids": false, 00:24:02.631 "transport_tos": 0, 00:24:02.631 "nvme_error_stat": false, 00:24:02.631 "rdma_srq_size": 0, 00:24:02.631 "io_path_stat": false, 00:24:02.631 "allow_accel_sequence": false, 00:24:02.631 "rdma_max_cq_size": 0, 00:24:02.631 "rdma_cm_event_timeout_ms": 0, 00:24:02.631 "dhchap_digests": [ 00:24:02.631 "sha256", 00:24:02.631 "sha384", 00:24:02.631 "sha512" 00:24:02.631 ], 00:24:02.631 "dhchap_dhgroups": [ 00:24:02.631 "null", 00:24:02.631 "ffdhe2048", 00:24:02.631 "ffdhe3072", 00:24:02.631 "ffdhe4096", 00:24:02.631 "ffdhe6144", 00:24:02.631 "ffdhe8192" 00:24:02.631 ] 00:24:02.631 } 00:24:02.631 }, 00:24:02.631 { 00:24:02.631 "method": "bdev_nvme_attach_controller", 00:24:02.631 "params": { 00:24:02.631 "name": "nvme0", 00:24:02.631 "trtype": "TCP", 00:24:02.631 "adrfam": "IPv4", 00:24:02.631 "traddr": "10.0.0.2", 00:24:02.631 "trsvcid": "4420", 00:24:02.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.631 "prchk_reftag": false, 00:24:02.631 "prchk_guard": false, 00:24:02.631 "ctrlr_loss_timeout_sec": 0, 00:24:02.631 "reconnect_delay_sec": 0, 00:24:02.631 "fast_io_fail_timeout_sec": 0, 00:24:02.631 "psk": "key0", 00:24:02.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.631 "hdgst": false, 00:24:02.631 "ddgst": false 00:24:02.631 } 00:24:02.631 }, 00:24:02.631 { 00:24:02.631 "method": "bdev_nvme_set_hotplug", 00:24:02.631 "params": { 00:24:02.631 "period_us": 100000, 00:24:02.631 "enable": false 00:24:02.631 } 00:24:02.631 }, 00:24:02.631 { 00:24:02.631 "method": "bdev_enable_histogram", 00:24:02.631 "params": { 00:24:02.631 "name": "nvme0n1", 00:24:02.631 "enable": true 00:24:02.631 } 00:24:02.631 }, 00:24:02.631 { 00:24:02.631 "method": "bdev_wait_for_examine" 00:24:02.631 } 00:24:02.631 ] 00:24:02.631 }, 00:24:02.631 { 00:24:02.631 "subsystem": "nbd", 00:24:02.631 "config": [] 00:24:02.631 } 00:24:02.631 ] 00:24:02.631 }' 00:24:02.631 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:02.631 03:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.631 [2024-07-21 03:33:47.834201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
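The bdevperf config just echoed is where TLS engages on the initiator side: keyring_file_add_key registers the PSK file under the name key0, and bdev_nvme_attach_controller references that name in its "psk" parameter while pointing at the secure listener. Done imperatively, the equivalent RPCs would look roughly like this (the flag spellings are an assumption about this SPDK tree's rpc.py, not copied from the log):

# Register the pre-shared key, then attach to the TLS-enabled subsystem with it.
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o2BoRyDPUu
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0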
00:24:02.631 [2024-07-21 03:33:47.834277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455777 ]
00:24:02.631 EAL: No free 2048 kB hugepages reported on node 1
00:24:02.631 [2024-07-21 03:33:47.893110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:02.890 [2024-07-21 03:33:47.979734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:02.890 [2024-07-21 03:33:48.153025] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:03.824 03:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:03.824 03:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:24:03.824 03:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:03.824 03:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name'
00:24:03.824 03:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.824 03:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:03.824 Running I/O for 1 seconds...
00:24:05.196
00:24:05.196 Latency(us)
00:24:05.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.196 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:24:05.196 Verification LBA range: start 0x0 length 0x2000
00:24:05.196 nvme0n1 : 1.02 3294.31 12.87 0.00 0.00 38489.51 7233.23 43302.31
00:24:05.196 ===================================================================================================================
00:24:05.196 Total : 3294.31 12.87 0.00 0.00 38489.51 7233.23 43302.31
00:24:05.196 0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:05.196 nvmf_trace.0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2455777
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2455777 ']'
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2455777
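As a quick sanity check on the latency table above: at a 4 KiB I/O size, 3294.31 IOPS x 4096 B is about 12.87 MiB/s, matching the MiB/s column, and with 128 commands kept in flight Little's law predicts an average latency of 128 / 3294.31 s, roughly 38.9 ms, consistent with the reported 38489.51 us.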
00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2455777 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2455777' 00:24:05.196 killing process with pid 2455777 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2455777 00:24:05.196 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.196 00:24:05.196 Latency(us) 00:24:05.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.196 =================================================================================================================== 00:24:05.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2455777 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.196 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.196 rmmod nvme_tcp 00:24:05.196 rmmod nvme_fabrics 00:24:05.454 rmmod nvme_keyring 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2455623 ']' 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2455623 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2455623 ']' 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2455623 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2455623 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2455623' 00:24:05.454 killing process with pid 2455623 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2455623 00:24:05.454 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2455623 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.713 03:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.616 03:33:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.616 03:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qLzm6C3aOa /tmp/tmp.A9FJNnUWWp /tmp/tmp.o2BoRyDPUu 00:24:07.616 00:24:07.616 real 1m18.599s 00:24:07.617 user 2m7.983s 00:24:07.617 sys 0m24.740s 00:24:07.617 03:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:07.617 03:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.617 ************************************ 00:24:07.617 END TEST nvmf_tls 00:24:07.617 ************************************ 00:24:07.617 03:33:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:07.617 03:33:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:07.617 03:33:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:07.617 03:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:07.617 ************************************ 00:24:07.617 START TEST nvmf_fips 00:24:07.617 ************************************ 00:24:07.617 03:33:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:07.875 * Looking for test storage... 
00:24:07.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.875 03:33:52 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:07.875 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:07.876 03:33:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:07.876 Error setting digest 00:24:07.876 00D22FB2467F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:07.876 00D22FB2467F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.876 03:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.771 
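Everything from the START TEST banner up to this point is the FIPS gate rather than NVMe work: the script demands an OpenSSL 3.x binary (the digit-by-digit 3.0.9 >= 3.0.0 comparison traced above), checks that the fips.so provider module exists and shows up in openssl list -providers, and finally proves a non-approved digest is rejected; the MD5 "Error setting digest" above is the expected outcome, not a failure. The version gate boils down to something like this sketch (not the harness's exact cmp_versions helper):

# Require OpenSSL >= 3.0.0 before attempting any FIPS-provider work.
ver=$(openssl version | awk '{print $2}')   # e.g. "3.0.9"
if [[ "$(printf '%s\n' "$ver" 3.0.0 | sort -V | head -n1)" != 3.0.0 ]]; then
    echo "OpenSSL >= 3.0.0 required, found $ver" >&2
    exit 1
fi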
03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.771 03:33:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.771 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.771 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.771 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.771 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.771 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:24:10.029 00:24:10.029 --- 10.0.0.2 ping statistics --- 00:24:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.029 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:24:10.029 00:24:10.029 --- 10.0.0.1 ping statistics --- 00:24:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.029 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2458134 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2458134 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2458134 ']' 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.029 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.029 [2024-07-21 03:33:55.223812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:10.029 [2024-07-21 03:33:55.223889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.029 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.029 [2024-07-21 03:33:55.292315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.287 [2024-07-21 03:33:55.385245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.287 [2024-07-21 03:33:55.385307] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
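[editor's note] The FIPS gate traced at the top of this test (fips/fips.sh@114-127) is the crux of the whole run: it points OPENSSL_CONF at the generated spdk_fips.conf, requires exactly two providers (base and fips) to be loaded, and then demands that a non-approved digest fail, which is precisely the "Error setting digest ... unsupported" seen above. A condensed bash sketch of that logic, with positive checks swapped in for the script's NOT wrapper (a restatement, not the script itself):

export OPENSSL_CONF=spdk_fips.conf                 # config written earlier in the trace
mapfile -t providers < <(openssl list -providers | grep name)
(( ${#providers[@]} == 2 )) || exit 1              # expect exactly the base and fips providers
[[ ${providers[0]} == *base* ]] || exit 1
[[ ${providers[1]} == *fips* ]] || exit 1
# MD5 is not FIPS-approved, so computing any MD5 digest MUST fail while the
# fips provider is active; success here would mean FIPS mode is not in effect.
openssl md5 /dev/null 2>/dev/null && exit 1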
00:24:10.287 [2024-07-21 03:33:55.385333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.287 [2024-07-21 03:33:55.385348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.287 [2024-07-21 03:33:55.385360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.287 [2024-07-21 03:33:55.385392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.287 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:10.544 [2024-07-21 03:33:55.742726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.544 [2024-07-21 03:33:55.758734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.544 [2024-07-21 03:33:55.758968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.544 [2024-07-21 03:33:55.790037] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:10.544 malloc0 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2458161 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2458161 /var/tmp/bdevperf.sock 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2458161 ']' 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.544 03:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.801 [2024-07-21 03:33:55.875349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:10.801 [2024-07-21 03:33:55.875437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458161 ] 00:24:10.801 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.801 [2024-07-21 03:33:55.941637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.801 [2024-07-21 03:33:56.040310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.058 03:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.058 03:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:11.058 03:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.317 [2024-07-21 03:33:56.372127] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.318 [2024-07-21 03:33:56.372274] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.318 TLSTESTn1 00:24:11.318 03:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.318 Running I/O for 10 seconds... 
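[editor's note] Before the bdevperf results below, the trace above is worth restating: the harness stages a retained TLS PSK with owner-only permissions, starts bdevperf suspended on its own core, attaches a TLS-wrapped NVMe/TCP controller with --psk, and only then drives the 10-second verify workload. A condensed replay of those traced commands; all paths and flags are exactly as logged, and only the $SPDK/$KEY shorthands are added here:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=$SPDK/test/nvmf/fips/key.txt

# stage the PSK; TLS key material must not be group/world readable
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"

# start bdevperf waiting for RPC-driven configuration (-z), then attach the
# TLS controller; the harness waits for /var/tmp/bdevperf.sock before the RPC
"$SPDK"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
"$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# kick off the timed verify run whose latency table is printed just below
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests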
00:24:23.542 00:24:23.542 Latency(us) 00:24:23.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.542 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.542 Verification LBA range: start 0x0 length 0x2000 00:24:23.542 TLSTESTn1 : 10.02 3397.60 13.27 0.00 0.00 37607.75 9175.04 42913.94 00:24:23.542 =================================================================================================================== 00:24:23.542 Total : 3397.60 13.27 0.00 0.00 37607.75 9175.04 42913.94 00:24:23.542 0 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:23.542 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:23.543 nvmf_trace.0 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2458161 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2458161 ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2458161 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2458161 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2458161' 00:24:23.543 killing process with pid 2458161 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2458161 00:24:23.543 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.543 00:24:23.543 Latency(us) 00:24:23.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.543 =================================================================================================================== 00:24:23.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.543 [2024-07-21 03:34:06.732865] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2458161 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.543 rmmod nvme_tcp 00:24:23.543 rmmod nvme_fabrics 00:24:23.543 rmmod nvme_keyring 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2458134 ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2458134 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2458134 ']' 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2458134 00:24:23.543 03:34:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2458134 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2458134' 00:24:23.543 killing process with pid 2458134 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2458134 00:24:23.543 [2024-07-21 03:34:07.026131] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2458134 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.543 03:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:24.109 00:24:24.109 real 0m16.401s 00:24:24.109 user 0m20.486s 00:24:24.109 sys 0m5.896s 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.109 ************************************ 00:24:24.109 END TEST nvmf_fips 
00:24:24.109 ************************************ 00:24:24.109 03:34:09 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:24.109 03:34:09 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:24.109 03:34:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:24.109 03:34:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:24.109 03:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.109 ************************************ 00:24:24.109 START TEST nvmf_fuzz 00:24:24.109 ************************************ 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:24.109 * Looking for test storage... 00:24:24.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.109 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.110 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.368 03:34:09 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.368 03:34:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:26.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:26.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:26.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:26.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:26.266 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:24:26.267 00:24:26.267 --- 10.0.0.2 ping statistics --- 00:24:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.267 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:24:26.267 00:24:26.267 --- 10.0.0.1 ping statistics --- 00:24:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.267 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.267 03:34:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.523 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2461408 00:24:26.523 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:26.523 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:26.523 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2461408 00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2461408 ']' 00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
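[editor's note] The nvmftestinit sequence traced above (nvmf/common.sh@229-268) is the same rig the earlier nvmf_fips run built: one physical ice port (cvl_0_0) is moved into a target namespace at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace at 10.0.0.1 as the initiator side, the firewall admits port 4420, and both directions are ping-verified before nvmf_tgt starts inside the namespace. Distilled from the trace, device names and addresses exactly as logged:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root ns
# the target then runs entirely inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1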
00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.524 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 Malloc0 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:26.781 03:34:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:58.845 Fuzzing completed. 
Shutting down the fuzz application 00:24:58.845 00:24:58.845 Dumping successful admin opcodes: 00:24:58.845 8, 9, 10, 24, 00:24:58.845 Dumping successful io opcodes: 00:24:58.845 0, 9, 00:24:58.845 NS: 0x200003aeff00 I/O qp, Total commands completed: 461242, total successful commands: 2668, random_seed: 1236430016 00:24:58.845 NS: 0x200003aeff00 admin qp, Total commands completed: 57184, total successful commands: 455, random_seed: 449719296 00:24:58.845 03:34:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:58.845 Fuzzing completed. Shutting down the fuzz application 00:24:58.845 00:24:58.845 Dumping successful admin opcodes: 00:24:58.845 24, 00:24:58.845 Dumping successful io opcodes: 00:24:58.845 00:24:58.845 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2347164875 00:24:58.845 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2347272014 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.845 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.102 rmmod nvme_tcp 00:24:59.102 rmmod nvme_fabrics 00:24:59.102 rmmod nvme_keyring 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2461408 ']' 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2461408 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2461408 ']' 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 2461408 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2461408 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
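[editor's note] The two nvme_fuzz passes above are distinct: the first hammers the target with seeded random admin/IO commands for a fixed 30-second window (461242 IO commands completed above), while the second deterministically replays the curated cases in example.json (only 16 admin commands, as its summary shows); teardown then deletes the subsystem over RPC and kills the target. A condensed restatement with flags exactly as logged; the $SPDK/$FUZZ/$TRID shorthands are added here, and rpc_cmd in the harness is assumed to be a thin wrapper over scripts/rpc.py:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FUZZ=$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# pass 1: 30 s of random commands under a fixed seed, so failures reproduce
"$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

# pass 2: replay the curated command set shipped with the fuzzer
"$FUZZ" -m 0x2 -F "$TRID" -j "$SPDK"/test/app/fuzz/nvme_fuzz/example.json -a

# teardown mirrors the trace: drop the subsystem, then stop the target
"$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1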
00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2461408' 00:24:59.102 killing process with pid 2461408 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 2461408 00:24:59.102 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 2461408 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.360 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.361 03:34:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.259 03:34:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:01.259 03:34:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:01.259 00:25:01.259 real 0m37.192s 00:25:01.259 user 0m51.471s 00:25:01.259 sys 0m15.059s 00:25:01.259 03:34:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:01.259 03:34:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.259 ************************************ 00:25:01.259 END TEST nvmf_fuzz 00:25:01.259 ************************************ 00:25:01.518 03:34:46 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.518 03:34:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:01.518 03:34:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:01.518 03:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:01.518 ************************************ 00:25:01.518 START TEST nvmf_multiconnection 00:25:01.518 ************************************ 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.518 * Looking for test storage... 
00:25:01.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.518 03:34:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.419 03:34:48 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:03.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:03.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.419 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:03.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:03.420 03:34:48 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:03.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
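The nvmf_tcp_init sequence traced above (and finished just below with the loopback, iptables, and ping checks) boils down to the following standalone sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this run; error handling and the surrounding xtrace plumbing are omitted.

  # target-side NIC goes into its own network namespace; the peer port
  # stays in the root namespace and acts as the initiator interface
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # address both ends of the link: initiator 10.0.0.1, target 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links (and the namespace loopback) up, open TCP/4420, verify
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1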
00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:25:03.420 00:25:03.420 --- 10.0.0.2 ping statistics --- 00:25:03.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.420 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:03.420 00:25:03.420 --- 10.0.0.1 ping statistics --- 00:25:03.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.420 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2467015 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2467015 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 2467015 ']' 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
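nvmfappstart then launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that handshake, assuming $SPDK_ROOT as a placeholder for the checkout path and rpc.py's spdk_get_version call as the liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

  # start the target in the namespace: shm id 0, all tracepoint groups
  # enabled (0xFFFF), reactors pinned by core mask 0xF
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll until the default RPC UNIX socket exists and the app responds
  for _ in $(seq 1 100); do
      if [ -S /var/tmp/spdk.sock ] &&
         "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version \
             >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done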
00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.420 03:34:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.420 [2024-07-21 03:34:48.716042] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:03.420 [2024-07-21 03:34:48.716126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.678 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.678 [2024-07-21 03:34:48.786013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.678 [2024-07-21 03:34:48.880695] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.678 [2024-07-21 03:34:48.880756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.678 [2024-07-21 03:34:48.880773] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.678 [2024-07-21 03:34:48.880786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.678 [2024-07-21 03:34:48.880799] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.678 [2024-07-21 03:34:48.880857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.678 [2024-07-21 03:34:48.880938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.678 [2024-07-21 03:34:48.881032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.678 [2024-07-21 03:34:48.881033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 [2024-07-21 03:34:49.034454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.936 03:34:49 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 Malloc1 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 [2024-07-21 03:34:49.089693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:03.936 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 Malloc2 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 Malloc3 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 Malloc4 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.937 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 Malloc5 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 Malloc6 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.196 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.196 03:34:49 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc7 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc8 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc9 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc10 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 Malloc11 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
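All eleven subsystems are provisioned with the same four RPCs, so the block of rpc_cmd traces above reduces to a single loop. The rpc.py path below is a placeholder; in the harness, rpc_cmd wraps the same calls against /var/tmp/spdk.sock, and the transport was created beforehand with nvmf_create_transport -t tcp -o -u 8192:

  rpc_py="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"
  for i in $(seq 1 11); do
      # 64 MiB malloc bdev with 512-byte blocks backing each namespace
      $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
      # subsystem with auto-allowed hosts (-a) and serial number SPDK$i
      $rpc_py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc_py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $rpc_py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done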
00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.455 03:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:05.021 03:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:05.021 03:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:05.021 03:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.021 03:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:05.021 03:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.546 03:34:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:07.804 03:34:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:07.804 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:07.804 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.804 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:07.804 03:34:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:09.698 03:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:09.698 03:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:09.698 03:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:09.698 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:09.698 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.698 
03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:09.699 03:34:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.699 03:34:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:10.649 03:34:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:10.649 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:10.649 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.649 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:10.649 03:34:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.547 03:34:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:13.112 03:34:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:13.112 03:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:13.112 03:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.112 03:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:13.112 03:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.005 03:35:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:15.934 03:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:15.934 03:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:15.934 03:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.934 03:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:15.934 03:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.827 03:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:18.757 03:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:18.757 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:18.757 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.757 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:18.757 03:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.647 03:35:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:21.575 03:35:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:21.575 03:35:06 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:21.575 03:35:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.575 03:35:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:21.575 03:35:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.464 03:35:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:24.392 03:35:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:24.392 03:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:24.392 03:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.392 03:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:24.392 03:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.283 03:35:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:27.215 03:35:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:27.215 03:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:27.215 03:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.215 03:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
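Each connect/waitforserial pair repeated above (and continuing below through cnode11) follows the same template; condensed into a loop, with the host UUID taken from this run's traces and waitforserial's retry cap of 16 kept:

  host_uuid=5b23e107-7094-e311-b1cb-001e67a97d55   # from this run
  for i in $(seq 1 11); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n "nqn.2016-06.io.spdk:cnode$i" \
          --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$host_uuid" \
          --hostid="$host_uuid"
      # waitforserial: poll lsblk until a device reports serial SPDK$i
      for try in $(seq 1 16); do
          [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] && break
          sleep 2
      done
  done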
00:25:27.215 03:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.110 03:35:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:30.040 03:35:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:30.040 03:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:30.040 03:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.040 03:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:30.040 03:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.932 03:35:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:32.861 03:35:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:32.862 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:32.862 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.862 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:32.862 03:35:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:34.792 03:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:34.792 03:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:34.792 03:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:34.792 03:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:34.792 03:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.792 03:35:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:34.792 03:35:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:34.792 [global] 00:25:34.792 thread=1 00:25:34.792 invalidate=1 00:25:34.792 rw=read 00:25:34.792 time_based=1 00:25:34.792 runtime=10 00:25:34.792 ioengine=libaio 00:25:34.792 direct=1 00:25:34.792 bs=262144 00:25:34.792 iodepth=64 00:25:34.792 norandommap=1 00:25:34.792 numjobs=1 00:25:34.792 00:25:34.792 [job0] 00:25:34.792 filename=/dev/nvme0n1 00:25:34.792 [job1] 00:25:34.792 filename=/dev/nvme10n1 00:25:34.792 [job2] 00:25:34.792 filename=/dev/nvme1n1 00:25:34.792 [job3] 00:25:34.792 filename=/dev/nvme2n1 00:25:34.792 [job4] 00:25:34.792 filename=/dev/nvme3n1 00:25:34.792 [job5] 00:25:34.792 filename=/dev/nvme4n1 00:25:34.792 [job6] 00:25:34.792 filename=/dev/nvme5n1 00:25:34.792 [job7] 00:25:34.792 filename=/dev/nvme6n1 00:25:34.792 [job8] 00:25:34.792 filename=/dev/nvme7n1 00:25:34.792 [job9] 00:25:34.792 filename=/dev/nvme8n1 00:25:34.792 [job10] 00:25:34.792 filename=/dev/nvme9n1 00:25:35.049 Could not set queue depth (nvme0n1) 00:25:35.049 Could not set queue depth (nvme10n1) 00:25:35.049 Could not set queue depth (nvme1n1) 00:25:35.049 Could not set queue depth (nvme2n1) 00:25:35.049 Could not set queue depth (nvme3n1) 00:25:35.049 Could not set queue depth (nvme4n1) 00:25:35.049 Could not set queue depth (nvme5n1) 00:25:35.049 Could not set queue depth (nvme6n1) 00:25:35.049 Could not set queue depth (nvme7n1) 00:25:35.049 Could not set queue depth (nvme8n1) 00:25:35.049 Could not set queue depth (nvme9n1) 00:25:35.049 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:35.049 fio-3.35 00:25:35.049 Starting 11 threads 00:25:47.254 00:25:47.254 job0: 
(groupid=0, jobs=1): err= 0: pid=2471254: Sun Jul 21 03:35:30 2024 00:25:47.254 read: IOPS=668, BW=167MiB/s (175MB/s)(1676MiB/10024msec) 00:25:47.254 slat (usec): min=13, max=110008, avg=1286.91, stdev=5063.01 00:25:47.254 clat (msec): min=3, max=279, avg=94.36, stdev=52.41 00:25:47.254 lat (msec): min=4, max=318, avg=95.65, stdev=53.16 00:25:47.254 clat percentiles (msec): 00:25:47.254 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 51], 00:25:47.254 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 92], 00:25:47.254 | 70.00th=[ 114], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 197], 00:25:47.254 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 271], 99.95th=[ 271], 00:25:47.254 | 99.99th=[ 279] 00:25:47.254 bw ( KiB/s): min=76646, max=378368, per=9.41%, avg=169950.00, stdev=91329.91, samples=20 00:25:47.254 iops : min= 299, max= 1478, avg=663.75, stdev=356.78, samples=20 00:25:47.254 lat (msec) : 4=0.03%, 10=0.51%, 20=0.63%, 50=19.08%, 100=44.50% 00:25:47.254 lat (msec) : 250=35.12%, 500=0.13% 00:25:47.254 cpu : usr=0.46%, sys=2.35%, ctx=1204, majf=0, minf=4097 00:25:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.254 issued rwts: total=6703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.254 job1: (groupid=0, jobs=1): err= 0: pid=2471255: Sun Jul 21 03:35:30 2024 00:25:47.254 read: IOPS=1047, BW=262MiB/s (275MB/s)(2650MiB/10117msec) 00:25:47.254 slat (usec): min=13, max=216907, avg=853.83, stdev=4477.13 00:25:47.254 clat (usec): min=1799, max=435586, avg=60178.65, stdev=51935.52 00:25:47.254 lat (usec): min=1814, max=435604, avg=61032.48, stdev=52666.98 00:25:47.254 clat percentiles (msec): 00:25:47.254 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 27], 20.00th=[ 29], 00:25:47.254 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 45], 60.00th=[ 52], 00:25:47.254 | 70.00th=[ 59], 80.00th=[ 73], 90.00th=[ 155], 95.00th=[ 192], 00:25:47.254 | 99.00th=[ 241], 99.50th=[ 264], 99.90th=[ 326], 99.95th=[ 330], 00:25:47.254 | 99.99th=[ 435] 00:25:47.254 bw ( KiB/s): min=89088, max=527360, per=14.93%, avg=269653.15, stdev=133401.75, samples=20 00:25:47.254 iops : min= 348, max= 2060, avg=1053.25, stdev=521.10, samples=20 00:25:47.254 lat (msec) : 2=0.11%, 4=0.39%, 10=0.91%, 20=4.04%, 50=51.76% 00:25:47.254 lat (msec) : 100=31.26%, 250=10.72%, 500=0.81% 00:25:47.254 cpu : usr=0.56%, sys=3.56%, ctx=1779, majf=0, minf=4097 00:25:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.254 issued rwts: total=10600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.254 job2: (groupid=0, jobs=1): err= 0: pid=2471259: Sun Jul 21 03:35:30 2024 00:25:47.254 read: IOPS=402, BW=101MiB/s (105MB/s)(1018MiB/10121msec) 00:25:47.254 slat (usec): min=13, max=116503, avg=2273.59, stdev=7030.67 00:25:47.254 clat (msec): min=33, max=297, avg=156.71, stdev=43.65 00:25:47.254 lat (msec): min=36, max=355, avg=158.98, stdev=44.59 00:25:47.254 clat percentiles (msec): 00:25:47.254 | 1.00th=[ 62], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 112], 00:25:47.254 | 30.00th=[ 133], 40.00th=[ 148], 
50.00th=[ 165], 60.00th=[ 176], 00:25:47.254 | 70.00th=[ 186], 80.00th=[ 197], 90.00th=[ 209], 95.00th=[ 218], 00:25:47.254 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 266], 99.95th=[ 266], 00:25:47.254 | 99.99th=[ 296] 00:25:47.254 bw ( KiB/s): min=67584, max=154624, per=5.68%, avg=102553.70, stdev=25158.33, samples=20 00:25:47.254 iops : min= 264, max= 604, avg=400.55, stdev=98.31, samples=20 00:25:47.254 lat (msec) : 50=0.29%, 100=14.35%, 250=84.89%, 500=0.47% 00:25:47.254 cpu : usr=0.25%, sys=1.58%, ctx=848, majf=0, minf=4097 00:25:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.254 issued rwts: total=4071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.254 job3: (groupid=0, jobs=1): err= 0: pid=2471261: Sun Jul 21 03:35:30 2024 00:25:47.254 read: IOPS=632, BW=158MiB/s (166MB/s)(1591MiB/10065msec) 00:25:47.254 slat (usec): min=13, max=143883, avg=1410.19, stdev=6273.89 00:25:47.254 clat (msec): min=3, max=326, avg=99.76, stdev=59.46 00:25:47.254 lat (msec): min=3, max=326, avg=101.17, stdev=60.43 00:25:47.254 clat percentiles (msec): 00:25:47.254 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 44], 00:25:47.254 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 85], 60.00th=[ 101], 00:25:47.254 | 70.00th=[ 129], 80.00th=[ 169], 90.00th=[ 192], 95.00th=[ 205], 00:25:47.254 | 99.00th=[ 234], 99.50th=[ 247], 99.90th=[ 296], 99.95th=[ 317], 00:25:47.254 | 99.99th=[ 326] 00:25:47.254 bw ( KiB/s): min=71680, max=361984, per=8.93%, avg=161219.10, stdev=86786.45, samples=20 00:25:47.254 iops : min= 280, max= 1414, avg=629.70, stdev=339.06, samples=20 00:25:47.254 lat (msec) : 4=0.02%, 10=0.33%, 20=0.94%, 50=22.73%, 100=35.96% 00:25:47.254 lat (msec) : 250=39.65%, 500=0.38% 00:25:47.254 cpu : usr=0.36%, sys=2.19%, ctx=1125, majf=0, minf=4097 00:25:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.254 issued rwts: total=6363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job4: (groupid=0, jobs=1): err= 0: pid=2471262: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=729, BW=182MiB/s (191MB/s)(1829MiB/10030msec) 00:25:47.255 slat (usec): min=11, max=120251, avg=1225.95, stdev=4615.63 00:25:47.255 clat (msec): min=3, max=256, avg=86.47, stdev=52.18 00:25:47.255 lat (msec): min=3, max=283, avg=87.69, stdev=53.00 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 45], 00:25:47.255 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 77], 60.00th=[ 86], 00:25:47.255 | 70.00th=[ 99], 80.00th=[ 117], 90.00th=[ 182], 95.00th=[ 197], 00:25:47.255 | 99.00th=[ 215], 99.50th=[ 224], 99.90th=[ 236], 99.95th=[ 239], 00:25:47.255 | 99.99th=[ 257] 00:25:47.255 bw ( KiB/s): min=77668, max=372224, per=10.28%, avg=185618.40, stdev=82310.87, samples=20 00:25:47.255 iops : min= 303, max= 1454, avg=725.00, stdev=321.59, samples=20 00:25:47.255 lat (msec) : 4=0.03%, 10=1.50%, 20=3.49%, 50=17.99%, 100=49.24% 00:25:47.255 lat (msec) : 250=27.74%, 500=0.01% 00:25:47.255 cpu : usr=0.40%, sys=2.63%, ctx=1319, majf=0, minf=3721 
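# --- editor's note -------------------------------------------------------
# The waitforserial polling repeated throughout the connect phase above
# (autotest_common.sh@1194-@1204: local i=0, sleep 2, (( i++ <= 15 )),
# lsblk -l -o NAME,SERIAL | grep -c SPDKn) boils down to roughly the
# helper below. This is a sketch reconstructed from the xtrace, not the
# verbatim SPDK source; the failure return value is an assumption.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while (( i++ <= 15 )); do                 # ~30 s upper bound
        sleep 2                               # give udev time to create the node
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1                                  # serial never appeared
}
# -------------------------------------------------------------------------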
00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=7315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job5: (groupid=0, jobs=1): err= 0: pid=2471263: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=502, BW=126MiB/s (132MB/s)(1271MiB/10112msec) 00:25:47.255 slat (usec): min=9, max=153038, avg=1508.20, stdev=7569.90 00:25:47.255 clat (usec): min=697, max=379836, avg=125740.52, stdev=70451.52 00:25:47.255 lat (usec): min=719, max=379884, avg=127248.72, stdev=71774.75 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 56], 00:25:47.255 | 30.00th=[ 75], 40.00th=[ 95], 50.00th=[ 127], 60.00th=[ 165], 00:25:47.255 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 211], 95.00th=[ 226], 00:25:47.255 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 326], 99.95th=[ 347], 00:25:47.255 | 99.99th=[ 380] 00:25:47.255 bw ( KiB/s): min=67072, max=261632, per=7.11%, avg=128445.00, stdev=56942.33, samples=20 00:25:47.255 iops : min= 262, max= 1022, avg=501.70, stdev=222.44, samples=20 00:25:47.255 lat (usec) : 750=0.02%, 1000=0.10% 00:25:47.255 lat (msec) : 2=0.37%, 4=1.32%, 10=2.14%, 20=3.36%, 50=11.06% 00:25:47.255 lat (msec) : 100=23.83%, 250=56.71%, 500=1.08% 00:25:47.255 cpu : usr=0.26%, sys=1.53%, ctx=1102, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=5082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job6: (groupid=0, jobs=1): err= 0: pid=2471264: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=711, BW=178MiB/s (186MB/s)(1781MiB/10018msec) 00:25:47.255 slat (usec): min=13, max=113165, avg=1333.93, stdev=4675.11 00:25:47.255 clat (usec): min=929, max=304938, avg=88620.28, stdev=42340.54 00:25:47.255 lat (usec): min=948, max=304978, avg=89954.21, stdev=43075.47 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 17], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 59], 00:25:47.255 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 88], 00:25:47.255 | 70.00th=[ 96], 80.00th=[ 113], 90.00th=[ 150], 95.00th=[ 192], 00:25:47.255 | 99.00th=[ 211], 99.50th=[ 215], 99.90th=[ 247], 99.95th=[ 305], 00:25:47.255 | 99.99th=[ 305] 00:25:47.255 bw ( KiB/s): min=84992, max=311296, per=10.01%, avg=180701.30, stdev=67136.49, samples=20 00:25:47.255 iops : min= 332, max= 1216, avg=705.75, stdev=262.15, samples=20 00:25:47.255 lat (usec) : 1000=0.01% 00:25:47.255 lat (msec) : 2=0.24%, 4=0.01%, 10=0.24%, 20=0.86%, 50=9.32% 00:25:47.255 lat (msec) : 100=62.92%, 250=26.31%, 500=0.08% 00:25:47.255 cpu : usr=0.47%, sys=2.45%, ctx=1161, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=7123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:25:47.255 job7: (groupid=0, jobs=1): err= 0: pid=2471265: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=717, BW=179MiB/s (188MB/s)(1815MiB/10117msec) 00:25:47.255 slat (usec): min=12, max=93172, avg=1295.12, stdev=4521.52 00:25:47.255 clat (usec): min=800, max=326730, avg=87812.83, stdev=51696.15 00:25:47.255 lat (usec): min=851, max=326795, avg=89107.96, stdev=52551.15 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 42], 20.00th=[ 55], 00:25:47.255 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 80], 00:25:47.255 | 70.00th=[ 90], 80.00th=[ 120], 90.00th=[ 180], 95.00th=[ 207], 00:25:47.255 | 99.00th=[ 245], 99.50th=[ 264], 99.90th=[ 292], 99.95th=[ 292], 00:25:47.255 | 99.99th=[ 326] 00:25:47.255 bw ( KiB/s): min=66560, max=301568, per=10.20%, avg=184194.85, stdev=75613.03, samples=20 00:25:47.255 iops : min= 260, max= 1178, avg=719.40, stdev=295.26, samples=20 00:25:47.255 lat (usec) : 1000=0.01% 00:25:47.255 lat (msec) : 2=0.29%, 4=0.29%, 10=0.99%, 20=0.81%, 50=13.14% 00:25:47.255 lat (msec) : 100=60.21%, 250=23.42%, 500=0.84% 00:25:47.255 cpu : usr=0.38%, sys=2.57%, ctx=1226, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=7260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job8: (groupid=0, jobs=1): err= 0: pid=2471266: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=424, BW=106MiB/s (111MB/s)(1073MiB/10119msec) 00:25:47.255 slat (usec): min=14, max=98968, avg=2049.42, stdev=7095.53 00:25:47.255 clat (msec): min=4, max=300, avg=148.76, stdev=63.29 00:25:47.255 lat (msec): min=4, max=300, avg=150.81, stdev=64.61 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 47], 20.00th=[ 75], 00:25:47.255 | 30.00th=[ 130], 40.00th=[ 153], 50.00th=[ 167], 60.00th=[ 180], 00:25:47.255 | 70.00th=[ 192], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 228], 00:25:47.255 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 288], 00:25:47.255 | 99.99th=[ 300] 00:25:47.255 bw ( KiB/s): min=62976, max=204288, per=5.99%, avg=108187.95, stdev=43642.27, samples=20 00:25:47.255 iops : min= 246, max= 798, avg=422.50, stdev=170.48, samples=20 00:25:47.255 lat (msec) : 10=0.49%, 20=1.86%, 50=8.04%, 100=15.92%, 250=71.87% 00:25:47.255 lat (msec) : 500=1.82% 00:25:47.255 cpu : usr=0.31%, sys=1.60%, ctx=949, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=4291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job9: (groupid=0, jobs=1): err= 0: pid=2471269: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=478, BW=120MiB/s (125MB/s)(1202MiB/10058msec) 00:25:47.255 slat (usec): min=9, max=139094, avg=1610.94, stdev=7518.57 00:25:47.255 clat (usec): min=1781, max=304994, avg=132187.57, stdev=74610.92 00:25:47.255 lat (usec): min=1807, max=328324, avg=133798.51, stdev=75911.23 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 4], 
5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 37], 00:25:47.255 | 30.00th=[ 79], 40.00th=[ 129], 50.00th=[ 159], 60.00th=[ 174], 00:25:47.255 | 70.00th=[ 190], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 226], 00:25:47.255 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 300], 99.95th=[ 305], 00:25:47.255 | 99.99th=[ 305] 00:25:47.255 bw ( KiB/s): min=76288, max=321536, per=6.73%, avg=121446.60, stdev=56813.23, samples=20 00:25:47.255 iops : min= 298, max= 1256, avg=474.35, stdev=221.96, samples=20 00:25:47.255 lat (msec) : 2=0.08%, 4=1.06%, 10=3.41%, 20=4.26%, 50=15.06% 00:25:47.255 lat (msec) : 100=10.75%, 250=64.33%, 500=1.04% 00:25:47.255 cpu : usr=0.21%, sys=1.42%, ctx=972, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 job10: (groupid=0, jobs=1): err= 0: pid=2471270: Sun Jul 21 03:35:30 2024 00:25:47.255 read: IOPS=768, BW=192MiB/s (201MB/s)(1943MiB/10118msec) 00:25:47.255 slat (usec): min=9, max=245443, avg=1088.37, stdev=5819.08 00:25:47.255 clat (usec): min=1459, max=442758, avg=82175.61, stdev=61775.95 00:25:47.255 lat (usec): min=1490, max=442832, avg=83263.98, stdev=62841.42 00:25:47.255 clat percentiles (msec): 00:25:47.255 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 34], 00:25:47.255 | 30.00th=[ 41], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 71], 00:25:47.255 | 70.00th=[ 90], 80.00th=[ 134], 90.00th=[ 190], 95.00th=[ 213], 00:25:47.255 | 99.00th=[ 249], 99.50th=[ 279], 99.90th=[ 309], 99.95th=[ 309], 00:25:47.255 | 99.99th=[ 443] 00:25:47.255 bw ( KiB/s): min=64000, max=479679, per=10.93%, avg=197314.65, stdev=120900.28, samples=20 00:25:47.255 iops : min= 250, max= 1873, avg=770.70, stdev=472.19, samples=20 00:25:47.255 lat (msec) : 2=0.01%, 4=0.28%, 10=1.36%, 20=3.44%, 50=32.83% 00:25:47.255 lat (msec) : 100=36.12%, 250=25.02%, 500=0.94% 00:25:47.255 cpu : usr=0.54%, sys=2.39%, ctx=1299, majf=0, minf=4097 00:25:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.255 issued rwts: total=7771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.255 00:25:47.255 Run status group 0 (all jobs): 00:25:47.255 READ: bw=1763MiB/s (1849MB/s), 101MiB/s-262MiB/s (105MB/s-275MB/s), io=17.4GiB (18.7GB), run=10018-10121msec 00:25:47.255 00:25:47.255 Disk stats (read/write): 00:25:47.255 nvme0n1: ios=13137/0, merge=0/0, ticks=1241560/0, in_queue=1241560, util=97.16% 00:25:47.255 nvme10n1: ios=21006/0, merge=0/0, ticks=1230584/0, in_queue=1230584, util=97.37% 00:25:47.255 nvme1n1: ios=7991/0, merge=0/0, ticks=1231996/0, in_queue=1231996, util=97.64% 00:25:47.255 nvme2n1: ios=12467/0, merge=0/0, ticks=1236891/0, in_queue=1236891, util=97.80% 00:25:47.255 nvme3n1: ios=14333/0, merge=0/0, ticks=1238117/0, in_queue=1238117, util=97.90% 00:25:47.255 nvme4n1: ios=10037/0, merge=0/0, ticks=1237723/0, in_queue=1237723, util=98.24% 00:25:47.255 nvme5n1: ios=13986/0, merge=0/0, ticks=1238859/0, in_queue=1238859, util=98.40% 00:25:47.255 nvme6n1: ios=14329/0, 
merge=0/0, ticks=1230817/0, in_queue=1230817, util=98.50% 00:25:47.255 nvme7n1: ios=8420/0, merge=0/0, ticks=1230650/0, in_queue=1230650, util=98.90% 00:25:47.255 nvme8n1: ios=9305/0, merge=0/0, ticks=1230024/0, in_queue=1230024, util=99.09% 00:25:47.255 nvme9n1: ios=15370/0, merge=0/0, ticks=1233269/0, in_queue=1233269, util=99.22% 00:25:47.255 03:35:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:47.255 [global] 00:25:47.255 thread=1 00:25:47.255 invalidate=1 00:25:47.255 rw=randwrite 00:25:47.255 time_based=1 00:25:47.255 runtime=10 00:25:47.255 ioengine=libaio 00:25:47.255 direct=1 00:25:47.255 bs=262144 00:25:47.255 iodepth=64 00:25:47.255 norandommap=1 00:25:47.255 numjobs=1 00:25:47.255 00:25:47.255 [job0] 00:25:47.255 filename=/dev/nvme0n1 00:25:47.255 [job1] 00:25:47.255 filename=/dev/nvme10n1 00:25:47.255 [job2] 00:25:47.255 filename=/dev/nvme1n1 00:25:47.255 [job3] 00:25:47.255 filename=/dev/nvme2n1 00:25:47.255 [job4] 00:25:47.255 filename=/dev/nvme3n1 00:25:47.255 [job5] 00:25:47.255 filename=/dev/nvme4n1 00:25:47.255 [job6] 00:25:47.255 filename=/dev/nvme5n1 00:25:47.255 [job7] 00:25:47.255 filename=/dev/nvme6n1 00:25:47.255 [job8] 00:25:47.255 filename=/dev/nvme7n1 00:25:47.255 [job9] 00:25:47.255 filename=/dev/nvme8n1 00:25:47.255 [job10] 00:25:47.255 filename=/dev/nvme9n1 00:25:47.255 Could not set queue depth (nvme0n1) 00:25:47.255 Could not set queue depth (nvme10n1) 00:25:47.255 Could not set queue depth (nvme1n1) 00:25:47.255 Could not set queue depth (nvme2n1) 00:25:47.255 Could not set queue depth (nvme3n1) 00:25:47.255 Could not set queue depth (nvme4n1) 00:25:47.255 Could not set queue depth (nvme5n1) 00:25:47.255 Could not set queue depth (nvme6n1) 00:25:47.255 Could not set queue depth (nvme7n1) 00:25:47.255 Could not set queue depth (nvme8n1) 00:25:47.255 Could not set queue depth (nvme9n1) 00:25:47.255 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.255 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.256 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.256 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.256 fio-3.35 00:25:47.256 Starting 11 threads 00:25:57.253 00:25:57.253 job0: (groupid=0, jobs=1): err= 0: pid=2472439: Sun Jul 21 03:35:41 2024 
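# --- editor's note -------------------------------------------------------
# fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 (above) just
# generates the job file printed in this trace. Running fio by hand with
# the same parameters looks roughly like this -- inferred from the
# [global] section dumped in the log, not from the wrapper's source:
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=randwrite --bs=262144 --iodepth=64 \
    --time_based=1 --runtime=10 --norandommap=1 --numjobs=1
# One such job per namespace (job0..job10 mapped to /dev/nvme0n1 through
# /dev/nvme10n1) reproduces the 11-thread run whose results follow.
# -------------------------------------------------------------------------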
00:25:57.253 write: IOPS=503, BW=126MiB/s (132MB/s)(1282MiB/10187msec); 0 zone resets 00:25:57.253 slat (usec): min=19, max=214341, avg=1414.16, stdev=5922.29 00:25:57.253 clat (usec): min=741, max=440251, avg=125636.12, stdev=96256.50 00:25:57.253 lat (usec): min=780, max=440292, avg=127050.28, stdev=97423.07 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 41], 00:25:57.253 | 30.00th=[ 42], 40.00th=[ 56], 50.00th=[ 121], 60.00th=[ 157], 00:25:57.253 | 70.00th=[ 171], 80.00th=[ 190], 90.00th=[ 264], 95.00th=[ 330], 00:25:57.253 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 439], 00:25:57.253 | 99.99th=[ 439] 00:25:57.253 bw ( KiB/s): min=34885, max=407040, per=9.30%, avg=129630.15, stdev=102276.54, samples=20 00:25:57.253 iops : min= 136, max= 1590, avg=506.35, stdev=399.53, samples=20 00:25:57.253 lat (usec) : 750=0.02%, 1000=0.06% 00:25:57.253 lat (msec) : 2=0.14%, 4=0.47%, 10=2.22%, 20=3.06%, 50=32.69% 00:25:57.253 lat (msec) : 100=9.26%, 250=41.39%, 500=10.69% 00:25:57.253 cpu : usr=1.50%, sys=1.41%, ctx=2747, majf=0, minf=1 00:25:57.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.253 issued rwts: total=0,5127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.253 job1: (groupid=0, jobs=1): err= 0: pid=2472451: Sun Jul 21 03:35:41 2024 00:25:57.253 write: IOPS=494, BW=124MiB/s (130MB/s)(1259MiB/10180msec); 0 zone resets 00:25:57.253 slat (usec): min=18, max=119728, avg=1501.39, stdev=4228.84 00:25:57.253 clat (usec): min=1007, max=390204, avg=127772.05, stdev=65281.95 00:25:57.253 lat (usec): min=1072, max=390239, avg=129273.43, stdev=66144.82 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 58], 20.00th=[ 82], 00:25:57.253 | 30.00th=[ 89], 40.00th=[ 104], 50.00th=[ 118], 60.00th=[ 130], 00:25:57.253 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 213], 95.00th=[ 262], 00:25:57.253 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:25:57.253 | 99.99th=[ 393] 00:25:57.253 bw ( KiB/s): min=51200, max=190464, per=9.13%, avg=127322.30, stdev=37377.00, samples=20 00:25:57.253 iops : min= 200, max= 744, avg=497.35, stdev=146.00, samples=20 00:25:57.253 lat (msec) : 2=0.10%, 4=0.36%, 10=0.66%, 20=1.25%, 50=5.46% 00:25:57.253 lat (msec) : 100=30.65%, 250=56.10%, 500=5.42% 00:25:57.253 cpu : usr=1.38%, sys=1.72%, ctx=2585, majf=0, minf=1 00:25:57.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:57.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.253 issued rwts: total=0,5037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.253 job2: (groupid=0, jobs=1): err= 0: pid=2472453: Sun Jul 21 03:35:41 2024 00:25:57.253 write: IOPS=548, BW=137MiB/s (144MB/s)(1396MiB/10179msec); 0 zone resets 00:25:57.253 slat (usec): min=19, max=83988, avg=1128.65, stdev=3612.98 00:25:57.253 clat (usec): min=1212, max=363978, avg=115443.41, stdev=64962.58 00:25:57.253 lat (usec): min=1273, max=364034, avg=116572.06, stdev=65824.20 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 7], 5.00th=[ 
17], 10.00th=[ 33], 20.00th=[ 64], 00:25:57.253 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 110], 60.00th=[ 124], 00:25:57.253 | 70.00th=[ 144], 80.00th=[ 165], 90.00th=[ 192], 95.00th=[ 222], 00:25:57.253 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 363], 00:25:57.253 | 99.99th=[ 363] 00:25:57.253 bw ( KiB/s): min=57344, max=237056, per=10.14%, avg=141296.30, stdev=47742.36, samples=20 00:25:57.253 iops : min= 224, max= 926, avg=551.90, stdev=186.48, samples=20 00:25:57.253 lat (msec) : 2=0.07%, 4=0.39%, 10=1.77%, 20=3.96%, 50=9.80% 00:25:57.253 lat (msec) : 100=28.98%, 250=51.66%, 500=3.37% 00:25:57.253 cpu : usr=1.75%, sys=1.84%, ctx=3481, majf=0, minf=1 00:25:57.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.253 issued rwts: total=0,5583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.253 job3: (groupid=0, jobs=1): err= 0: pid=2472454: Sun Jul 21 03:35:41 2024 00:25:57.253 write: IOPS=600, BW=150MiB/s (157MB/s)(1508MiB/10037msec); 0 zone resets 00:25:57.253 slat (usec): min=16, max=38819, avg=1115.82, stdev=3138.19 00:25:57.253 clat (msec): min=2, max=343, avg=105.37, stdev=72.75 00:25:57.253 lat (msec): min=2, max=343, avg=106.49, stdev=73.40 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 35], 20.00th=[ 41], 00:25:57.253 | 30.00th=[ 47], 40.00th=[ 56], 50.00th=[ 85], 60.00th=[ 120], 00:25:57.253 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 197], 95.00th=[ 236], 00:25:57.253 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 334], 00:25:57.253 | 99.99th=[ 342] 00:25:57.253 bw ( KiB/s): min=82944, max=401920, per=10.96%, avg=152746.00, stdev=77930.10, samples=20 00:25:57.253 iops : min= 324, max= 1570, avg=596.65, stdev=304.43, samples=20 00:25:57.253 lat (msec) : 4=0.18%, 10=1.51%, 20=3.48%, 50=30.35%, 100=18.14% 00:25:57.253 lat (msec) : 250=42.04%, 500=4.30% 00:25:57.253 cpu : usr=1.85%, sys=1.94%, ctx=3146, majf=0, minf=1 00:25:57.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:57.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.253 issued rwts: total=0,6030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.253 job4: (groupid=0, jobs=1): err= 0: pid=2472455: Sun Jul 21 03:35:41 2024 00:25:57.253 write: IOPS=599, BW=150MiB/s (157MB/s)(1521MiB/10141msec); 0 zone resets 00:25:57.253 slat (usec): min=13, max=190844, avg=1321.06, stdev=5571.06 00:25:57.253 clat (usec): min=1120, max=545155, avg=105303.94, stdev=91917.18 00:25:57.253 lat (usec): min=1143, max=545259, avg=106625.00, stdev=93061.18 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 40], 00:25:57.253 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 54], 60.00th=[ 110], 00:25:57.253 | 70.00th=[ 159], 80.00th=[ 184], 90.00th=[ 234], 95.00th=[ 284], 00:25:57.253 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 518], 99.95th=[ 531], 00:25:57.253 | 99.99th=[ 542] 00:25:57.253 bw ( KiB/s): min=42496, max=377856, per=11.06%, avg=154137.60, stdev=105955.61, samples=20 00:25:57.253 iops : min= 166, max= 1476, avg=602.10, stdev=413.89, 
samples=20 00:25:57.253 lat (msec) : 2=0.20%, 4=1.07%, 10=2.99%, 20=6.10%, 50=38.18% 00:25:57.253 lat (msec) : 100=10.90%, 250=33.15%, 500=7.18%, 750=0.23% 00:25:57.253 cpu : usr=1.97%, sys=2.36%, ctx=2915, majf=0, minf=1 00:25:57.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:57.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.253 issued rwts: total=0,6084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.253 job5: (groupid=0, jobs=1): err= 0: pid=2472456: Sun Jul 21 03:35:41 2024 00:25:57.253 write: IOPS=505, BW=126MiB/s (133MB/s)(1287MiB/10176msec); 0 zone resets 00:25:57.253 slat (usec): min=16, max=171038, avg=1295.57, stdev=4760.30 00:25:57.253 clat (usec): min=1142, max=503204, avg=125160.93, stdev=79526.46 00:25:57.253 lat (msec): min=2, max=503, avg=126.46, stdev=80.43 00:25:57.253 clat percentiles (msec): 00:25:57.253 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 54], 00:25:57.253 | 30.00th=[ 77], 40.00th=[ 108], 50.00th=[ 123], 60.00th=[ 142], 00:25:57.253 | 70.00th=[ 159], 80.00th=[ 176], 90.00th=[ 220], 95.00th=[ 296], 00:25:57.253 | 99.00th=[ 351], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 397], 00:25:57.253 | 99.99th=[ 502] 00:25:57.253 bw ( KiB/s): min=43094, max=237568, per=9.34%, avg=130142.40, stdev=44844.20, samples=20 00:25:57.253 iops : min= 168, max= 928, avg=508.35, stdev=175.21, samples=20 00:25:57.253 lat (msec) : 2=0.08%, 4=0.64%, 10=4.04%, 20=5.03%, 50=9.15% 00:25:57.253 lat (msec) : 100=17.78%, 250=55.95%, 500=7.29%, 750=0.04% 00:25:57.253 cpu : usr=1.70%, sys=1.44%, ctx=3189, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,5147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 job6: (groupid=0, jobs=1): err= 0: pid=2472457: Sun Jul 21 03:35:41 2024 00:25:57.254 write: IOPS=409, BW=102MiB/s (107MB/s)(1044MiB/10185msec); 0 zone resets 00:25:57.254 slat (usec): min=18, max=104448, avg=1942.87, stdev=4755.03 00:25:57.254 clat (usec): min=866, max=363370, avg=154128.06, stdev=63405.77 00:25:57.254 lat (usec): min=902, max=363424, avg=156070.92, stdev=64325.64 00:25:57.254 clat percentiles (msec): 00:25:57.254 | 1.00th=[ 18], 5.00th=[ 57], 10.00th=[ 70], 20.00th=[ 104], 00:25:57.254 | 30.00th=[ 120], 40.00th=[ 142], 50.00th=[ 159], 60.00th=[ 169], 00:25:57.254 | 70.00th=[ 180], 80.00th=[ 199], 90.00th=[ 236], 95.00th=[ 268], 00:25:57.254 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 355], 00:25:57.254 | 99.99th=[ 363] 00:25:57.254 bw ( KiB/s): min=57344, max=180224, per=7.55%, avg=105232.40, stdev=29550.55, samples=20 00:25:57.254 iops : min= 224, max= 704, avg=411.05, stdev=115.44, samples=20 00:25:57.254 lat (usec) : 1000=0.05% 00:25:57.254 lat (msec) : 2=0.14%, 4=0.55%, 10=0.12%, 20=0.17%, 50=2.80% 00:25:57.254 lat (msec) : 100=14.83%, 250=73.60%, 500=7.74% 00:25:57.254 cpu : usr=1.18%, sys=1.46%, ctx=1911, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
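# --- editor's note -------------------------------------------------------
# The teardown that follows the write-phase results (multiconnection.sh
# @37-@40 with waitforserial_disconnect, autotest_common.sh@1215-@1227)
# amounts to the loop sketched below. Reconstructed from the xtrace; the
# retry bound and sleep are assumptions, since every disconnect in this
# run succeeded on the first lsblk check:
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    tries=0
    # wait until no block device reports serial SPDK$i any more
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
        (( tries++ > 15 )) && break
        sleep 1
    done
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done
# -------------------------------------------------------------------------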
00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,4174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 job7: (groupid=0, jobs=1): err= 0: pid=2472458: Sun Jul 21 03:35:41 2024 00:25:57.254 write: IOPS=460, BW=115MiB/s (121MB/s)(1173MiB/10189msec); 0 zone resets 00:25:57.254 slat (usec): min=17, max=59578, avg=1620.29, stdev=4565.15 00:25:57.254 clat (usec): min=1193, max=398686, avg=137345.12, stdev=81882.73 00:25:57.254 lat (usec): min=1759, max=398719, avg=138965.41, stdev=83020.74 00:25:57.254 clat percentiles (msec): 00:25:57.254 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 61], 00:25:57.254 | 30.00th=[ 95], 40.00th=[ 114], 50.00th=[ 128], 60.00th=[ 157], 00:25:57.254 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 247], 95.00th=[ 296], 00:25:57.254 | 99.00th=[ 355], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 393], 00:25:57.254 | 99.99th=[ 401] 00:25:57.254 bw ( KiB/s): min=50176, max=197632, per=8.49%, avg=118411.50, stdev=45166.16, samples=20 00:25:57.254 iops : min= 196, max= 772, avg=462.50, stdev=176.41, samples=20 00:25:57.254 lat (msec) : 2=0.09%, 4=0.64%, 10=1.79%, 20=3.60%, 50=11.11% 00:25:57.254 lat (msec) : 100=14.43%, 250=59.36%, 500=8.98% 00:25:57.254 cpu : usr=1.33%, sys=1.86%, ctx=2626, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,4690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 job8: (groupid=0, jobs=1): err= 0: pid=2472459: Sun Jul 21 03:35:41 2024 00:25:57.254 write: IOPS=334, BW=83.5MiB/s (87.6MB/s)(847MiB/10137msec); 0 zone resets 00:25:57.254 slat (usec): min=22, max=179764, avg=2416.11, stdev=6168.03 00:25:57.254 clat (msec): min=2, max=405, avg=189.08, stdev=78.32 00:25:57.254 lat (msec): min=2, max=405, avg=191.50, stdev=79.32 00:25:57.254 clat percentiles (msec): 00:25:57.254 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 71], 20.00th=[ 138], 00:25:57.254 | 30.00th=[ 155], 40.00th=[ 171], 50.00th=[ 192], 60.00th=[ 209], 00:25:57.254 | 70.00th=[ 230], 80.00th=[ 257], 90.00th=[ 288], 95.00th=[ 317], 00:25:57.254 | 99.00th=[ 342], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 405], 00:25:57.254 | 99.99th=[ 405] 00:25:57.254 bw ( KiB/s): min=53760, max=130048, per=6.10%, avg=85065.45, stdev=23353.43, samples=20 00:25:57.254 iops : min= 210, max= 508, avg=332.25, stdev=91.24, samples=20 00:25:57.254 lat (msec) : 4=0.09%, 10=1.95%, 20=0.97%, 50=4.49%, 100=4.78% 00:25:57.254 lat (msec) : 250=64.00%, 500=23.72% 00:25:57.254 cpu : usr=1.05%, sys=1.14%, ctx=1564, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,3386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 job9: (groupid=0, jobs=1): err= 0: pid=2472460: Sun Jul 21 03:35:41 2024 00:25:57.254 write: IOPS=443, BW=111MiB/s (116MB/s)(1125MiB/10138msec); 0 zone resets 00:25:57.254 slat (usec): min=18, max=35275, avg=1922.03, 
stdev=4520.40 00:25:57.254 clat (msec): min=2, max=398, avg=142.19, stdev=81.14 00:25:57.254 lat (msec): min=2, max=398, avg=144.11, stdev=82.31 00:25:57.254 clat percentiles (msec): 00:25:57.254 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 49], 00:25:57.254 | 30.00th=[ 84], 40.00th=[ 127], 50.00th=[ 144], 60.00th=[ 163], 00:25:57.254 | 70.00th=[ 186], 80.00th=[ 211], 90.00th=[ 247], 95.00th=[ 292], 00:25:57.254 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 380], 99.95th=[ 380], 00:25:57.254 | 99.99th=[ 397] 00:25:57.254 bw ( KiB/s): min=55296, max=355527, per=8.15%, avg=113612.30, stdev=70509.17, samples=20 00:25:57.254 iops : min= 216, max= 1388, avg=443.75, stdev=275.29, samples=20 00:25:57.254 lat (msec) : 4=0.16%, 10=0.96%, 20=1.73%, 50=17.91%, 100=12.20% 00:25:57.254 lat (msec) : 250=57.51%, 500=9.53% 00:25:57.254 cpu : usr=1.33%, sys=1.50%, ctx=1834, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,4500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 job10: (groupid=0, jobs=1): err= 0: pid=2472461: Sun Jul 21 03:35:41 2024 00:25:57.254 write: IOPS=569, BW=142MiB/s (149MB/s)(1430MiB/10039msec); 0 zone resets 00:25:57.254 slat (usec): min=17, max=56848, avg=1306.79, stdev=3488.25 00:25:57.254 clat (usec): min=1257, max=346552, avg=110958.97, stdev=71523.00 00:25:57.254 lat (usec): min=1373, max=351222, avg=112265.77, stdev=72322.73 00:25:57.254 clat percentiles (msec): 00:25:57.254 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 47], 00:25:57.254 | 30.00th=[ 51], 40.00th=[ 77], 50.00th=[ 97], 60.00th=[ 123], 00:25:57.254 | 70.00th=[ 140], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 271], 00:25:57.254 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 342], 00:25:57.254 | 99.99th=[ 347] 00:25:57.254 bw ( KiB/s): min=49152, max=371200, per=10.39%, avg=144832.55, stdev=78401.05, samples=20 00:25:57.254 iops : min= 192, max= 1450, avg=565.75, stdev=306.25, samples=20 00:25:57.254 lat (msec) : 2=0.12%, 4=0.37%, 10=0.77%, 20=1.03%, 50=27.65% 00:25:57.254 lat (msec) : 100=21.19%, 250=42.93%, 500=5.94% 00:25:57.254 cpu : usr=1.69%, sys=1.71%, ctx=2464, majf=0, minf=1 00:25:57.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.254 issued rwts: total=0,5721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.254 00:25:57.254 Run status group 0 (all jobs): 00:25:57.254 WRITE: bw=1361MiB/s (1427MB/s), 83.5MiB/s-150MiB/s (87.6MB/s-157MB/s), io=13.5GiB (14.5GB), run=10037-10189msec 00:25:57.254 00:25:57.254 Disk stats (read/write): 00:25:57.254 nvme0n1: ios=42/10215, merge=0/0, ticks=598/1241720, in_queue=1242318, util=100.00% 00:25:57.254 nvme10n1: ios=48/10047, merge=0/0, ticks=119/1239869, in_queue=1239988, util=98.03% 00:25:57.254 nvme1n1: ios=46/11141, merge=0/0, ticks=1132/1242805, in_queue=1243937, util=100.00% 00:25:57.254 nvme2n1: ios=0/11608, merge=0/0, ticks=0/1223648, in_queue=1223648, util=97.50% 00:25:57.254 nvme3n1: ios=43/11998, merge=0/0, ticks=2341/1186854, 
in_queue=1189195, util=99.84% 00:25:57.254 nvme4n1: ios=0/10271, merge=0/0, ticks=0/1245266, in_queue=1245266, util=98.01% 00:25:57.254 nvme5n1: ios=21/8316, merge=0/0, ticks=68/1237368, in_queue=1237436, util=98.45% 00:25:57.254 nvme6n1: ios=0/9337, merge=0/0, ticks=0/1238007, in_queue=1238007, util=98.32% 00:25:57.254 nvme7n1: ios=0/6601, merge=0/0, ticks=0/1202814, in_queue=1202814, util=98.71% 00:25:57.254 nvme8n1: ios=45/8825, merge=0/0, ticks=117/1200294, in_queue=1200411, util=99.57% 00:25:57.254 nvme9n1: ios=0/11012, merge=0/0, ticks=0/1219461, in_queue=1219461, util=99.07% 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:57.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.254 03:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:57.254 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:57.254 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.255 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:57.513 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.513 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:57.771 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:57.771 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:57.771 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.772 03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.772 
03:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:57.772 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:57.772 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:57.772 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:57.772 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:57.772 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.045 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:58.302 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:58.302 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:58.303 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.303 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.559 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:58.560 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.560 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:58.816 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:58.816 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:58.816 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:58.816 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # return 0 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.817 03:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:58.817 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.817 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:59.074 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.074 rmmod nvme_tcp 00:25:59.074 rmmod nvme_fabrics 00:25:59.074 rmmod nvme_keyring 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2467015 ']' 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2467015 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 2467015 ']' 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 2467015 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2467015 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2467015' 00:25:59.074 killing process with pid 2467015 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 2467015 00:25:59.074 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 2467015 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.638 03:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.167 03:35:46 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:02.167 00:26:02.167 real 1m0.263s 00:26:02.167 user 3m21.077s 00:26:02.167 sys 0m25.112s 00:26:02.167 03:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:02.167 03:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.167 ************************************ 00:26:02.167 END TEST nvmf_multiconnection 00:26:02.167 ************************************ 00:26:02.167 03:35:46 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:02.167 03:35:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:02.167 03:35:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:02.167 03:35:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.167 ************************************ 00:26:02.167 START TEST nvmf_initiator_timeout 00:26:02.167 ************************************ 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:02.167 * Looking for test storage... 00:26:02.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.167 03:35:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.068 03:35:48 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.068 
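The scan above classifies NICs from a fixed vendor:device table: Intel E810 parts (0x8086:0x1592 and 0x8086:0x159b, driven by ice), x722 (0x8086:0x37d2), and a range of Mellanox ConnectX IDs; both ports on this rig match 0x159b. A minimal standalone sketch of the same matching, assuming only that lspci is installed (the script and variable names here are illustrative, not the harness's own):

  #!/usr/bin/env bash
  # Illustrative re-creation of the vendor:device matching done by
  # gather_supported_nvmf_pci_devs; the IDs are copied from the records above.
  declare -A nic_family=(
      [8086:1592]=e810 [8086:159b]=e810   # Intel E810 (ice)
      [8086:37d2]=x722                    # Intel x722 (i40e)
      [15b3:a2dc]=mlx  [15b3:1021]=mlx    # two of the Mellanox ConnectX IDs
  )
  # lspci -Dn prints: "0000:0a:00.0 0200: 8086:159b (rev 02)"
  while read -r addr _ ids _; do
      fam=${nic_family[$ids]:-}
      [[ -n $fam ]] && echo "Found $addr ($ids) -> $fam"
  done < <(lspci -Dn)

Only devices that also expose an up interface under /sys/bus/pci/devices/$pci/net/ make it into net_devs, which is what the "Found net devices under ..." records just below report.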
03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.068 03:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.068 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.068 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:26:04.069 00:26:04.069 --- 10.0.0.2 ping statistics --- 00:26:04.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.069 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:26:04.069 00:26:04.069 --- 10.0.0.1 ping statistics --- 00:26:04.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.069 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2475779 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2475779 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 2475779 ']' 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.069 [2024-07-21 03:35:49.084409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:04.069 [2024-07-21 03:35:49.084480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.069 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.069 [2024-07-21 03:35:49.147936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.069 [2024-07-21 03:35:49.233124] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
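Worth noting how the phy topology was assembled above: instead of veth pairs, nvmf_tcp_init moves one physical E810 port into a private network namespace and leaves its twin on the host, then proves the path with a ping in each direction before anything NVMe-oF runs. Condensed from the ip/iptables records above, with interface and namespace names exactly as in this run:

  ip netns add cvl_0_0_ns_spdk                 # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # one port moves in...
  ip addr add 10.0.0.1/24 dev cvl_0_1          # ...its twin stays host-side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

From here on every target-side command is prefixed with ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt launch recorded just above, so initiator traffic actually crosses the two NIC ports rather than the kernel loopback.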
00:26:04.069 [2024-07-21 03:35:49.233190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.069 [2024-07-21 03:35:49.233213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.069 [2024-07-21 03:35:49.233225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.069 [2024-07-21 03:35:49.233234] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.069 [2024-07-21 03:35:49.233328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.069 [2024-07-21 03:35:49.233404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.069 [2024-07-21 03:35:49.233460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.069 [2024-07-21 03:35:49.233462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.069 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 Malloc0 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 Delay0 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 [2024-07-21 03:35:49.401113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.325 [2024-07-21 03:35:49.429363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.325 03:35:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:04.889 03:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:04.889 03:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:26:04.889 03:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.889 03:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:04.889 03:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2476183 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:07.406 03:35:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:07.406 [global] 00:26:07.406 thread=1 00:26:07.406 invalidate=1 00:26:07.406 rw=write 00:26:07.406 time_based=1 00:26:07.406 runtime=60 00:26:07.406 
ioengine=libaio 00:26:07.406 direct=1 00:26:07.406 bs=4096 00:26:07.406 iodepth=1 00:26:07.406 norandommap=0 00:26:07.406 numjobs=1 00:26:07.406 00:26:07.406 verify_dump=1 00:26:07.406 verify_backlog=512 00:26:07.406 verify_state_save=0 00:26:07.406 do_verify=1 00:26:07.406 verify=crc32c-intel 00:26:07.406 [job0] 00:26:07.406 filename=/dev/nvme0n1 00:26:07.406 Could not set queue depth (nvme0n1) 00:26:07.406 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:07.406 fio-3.35 00:26:07.406 Starting 1 thread 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.973 true 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.973 true 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.973 true 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.973 true 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.973 03:35:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.265 true 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.265 true 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.265 
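Pulling the rpc_cmd records above together, the mechanism this test exercises is: a 64 MB malloc bdev is wrapped in a delay bdev (Delay0, 30 us nominal latencies) and exported over TCP; a 60-second, queue-depth-1, 4 KiB verifying write job runs through the kernel initiator; every latency class is then pushed to 31 s or more, past the Linux nvme driver's default 30 s I/O timeout (nvme_core.io_timeout), forcing in-flight commands through the timeout/reset path; finally latency is restored so the job can still finish cleanly. A sketch of the same sequence against scripts/rpc.py, values in microseconds and copied from the records (a paraphrase of the test, not the script itself):

  rpc=scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MB, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # 30 us delays
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with fio running: stall I/O past the initiator's 30 s timeout...
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # ...then drop back to 30 us, as in the @48-@51 records around this point
  for lat in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$lat" 30
  done

That stall is what produces the completion-latency outliers in the fio summary below (max clat of 41122k us, roughly 41 s, against a median near 300 us), and the test passes only if fio's crc32c verification still reports success afterwards.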
03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.265 true 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.265 true 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:13.265 03:35:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2476183 00:27:09.453 00:27:09.453 job0: (groupid=0, jobs=1): err= 0: pid=2476279: Sun Jul 21 03:36:52 2024 00:27:09.453 read: IOPS=191, BW=766KiB/s (785kB/s)(44.9MiB/60033msec) 00:27:09.453 slat (usec): min=5, max=15615, avg=13.41, stdev=168.10 00:27:09.453 clat (usec): min=223, max=41122k, avg=4944.01, stdev=383456.23 00:27:09.453 lat (usec): min=229, max=41122k, avg=4957.42, stdev=383456.30 00:27:09.453 clat percentiles (usec): 00:27:09.453 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 273], 00:27:09.453 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:27:09.453 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 420], 00:27:09.453 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:09.453 | 99.99th=[41157] 00:27:09.453 write: IOPS=196, BW=785KiB/s (803kB/s)(46.0MiB/60033msec); 0 zone resets 00:27:09.453 slat (nsec): min=6465, max=71769, avg=15384.86, stdev=8444.28 00:27:09.453 clat (usec): min=172, max=480, avg=232.41, stdev=46.41 00:27:09.453 lat (usec): min=179, max=519, avg=247.80, stdev=52.68 00:27:09.453 clat percentiles (usec): 00:27:09.453 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:27:09.453 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:27:09.453 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 343], 00:27:09.453 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 441], 99.95th=[ 453], 00:27:09.453 | 99.99th=[ 478] 00:27:09.453 bw ( KiB/s): min= 1088, max= 8192, per=100.00%, avg=6729.14, stdev=2147.82, samples=14 00:27:09.453 iops : min= 272, max= 2048, avg=1682.29, stdev=536.96, samples=14 00:27:09.453 lat (usec) : 250=40.81%, 500=57.42%, 750=0.46% 00:27:09.453 lat (msec) : 2=0.02%, 4=0.01%, 50=1.29%, >=2000=0.01% 00:27:09.453 cpu : usr=0.39%, sys=0.71%, ctx=23282, majf=0, minf=2 00:27:09.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.453 issued rwts: total=11503,11776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:09.453 00:27:09.453 Run status group 0 (all jobs): 00:27:09.453 READ: bw=766KiB/s (785kB/s), 766KiB/s-766KiB/s (785kB/s-785kB/s), io=44.9MiB (47.1MB), 
run=60033-60033msec 00:27:09.453 WRITE: bw=785KiB/s (803kB/s), 785KiB/s-785KiB/s (803kB/s-803kB/s), io=46.0MiB (48.2MB), run=60033-60033msec 00:27:09.453 00:27:09.453 Disk stats (read/write): 00:27:09.453 nvme0n1: ios=11598/11776, merge=0/0, ticks=16768/2641, in_queue=19409, util=99.67% 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:09.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:09.453 nvmf hotplug test: fio successful as expected 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.453 rmmod nvme_tcp 00:27:09.453 rmmod nvme_fabrics 00:27:09.453 rmmod nvme_keyring 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2475779 ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout 
-- nvmf/common.sh@490 -- # killprocess 2475779 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 2475779 ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 2475779 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2475779 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2475779' 00:27:09.453 killing process with pid 2475779 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 2475779 00:27:09.453 03:36:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 2475779 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.453 03:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.019 03:36:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.019 00:27:10.019 real 1m8.144s 00:27:10.019 user 4m10.790s 00:27:10.019 sys 0m7.025s 00:27:10.019 03:36:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:10.019 03:36:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.019 ************************************ 00:27:10.019 END TEST nvmf_initiator_timeout 00:27:10.019 ************************************ 00:27:10.019 03:36:55 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:10.019 03:36:55 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:10.019 03:36:55 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:10.019 03:36:55 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.019 03:36:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.918 03:36:57 nvmf_tcp -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:11.918 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:11.918 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
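This second enumeration is nvmf.sh itself re-checking the rig: the @71-@76 records surrounding this point gate the hardware-dependent ADQ suite on NET_TYPE=phy, the tcp transport, and at least one supported NIC actually being found. Paraphrased as a sketch (variable names are a guess where the xtrace shows only expanded values):

  # sketch of the nvmf.sh gate around these records; names illustrative
  if [[ $NET_TYPE == phy ]] && [ "$TEST_TRANSPORT" = tcp ]; then
      gather_supported_nvmf_pci_devs
      TCP_INTERFACE_LIST=("${net_devs[@]}")
      if ((${#TCP_INTERFACE_LIST[@]} > 0)); then
          run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
      fi
  fi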
00:27:11.918 03:36:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:11.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:11.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:11.919 03:36:57 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:11.919 03:36:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:11.919 03:36:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.919 03:36:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 ************************************ 00:27:11.919 START TEST nvmf_perf_adq 00:27:11.919 ************************************ 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:11.919 * Looking for test storage... 
00:27:11.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.919 03:36:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:14.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:14.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 
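The discovery trace above matches NICs by PCI vendor:device pair (0x8086:0x159b is an Intel E810 port) and then resolves each matching PCI function to its kernel net devices through sysfs. A minimal standalone sketch of the same lookup, assuming only the standard Linux sysfs layout and the IDs visible in this run:

    #!/usr/bin/env bash
    # Sketch of the pci_bus_cache-style discovery done by nvmf/common.sh:
    # walk every PCI function, match the E810 vendor/device pair seen in
    # this log, and list the net devices the bound driver exposes.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x8086
        device=$(<"$pci/device")    # e.g. 0x159b
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            # A port bound to a kernel driver exposes its interfaces here:
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
            done
        fi
    done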
00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:14.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:14.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:14.446 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:14.705 03:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:16.614 03:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:21.882 03:37:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:21.882 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:21.882 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:21.882 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:21.882 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.882 03:37:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.882 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.882 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.882 03:37:07 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:21.883 00:27:21.883 --- 10.0.0.2 ping statistics --- 00:27:21.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.883 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:27:21.883 00:27:21.883 --- 10.0.0.1 ping statistics --- 00:27:21.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.883 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2488399 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2488399 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2488399 ']' 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.883 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.883 [2024-07-21 03:37:07.105324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
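The nvmftestinit sequence just traced builds the whole test topology out of one two-port E810: port cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24 for the target, cvl_0_1 stays in the root namespace as the 10.0.0.1/24 initiator, an iptables rule admits TCP port 4420, and a ping in each direction proves the path. Condensed from the trace into a reusable sketch (interface and namespace names are the ones used on this machine and would differ elsewhere):

    # Sketch of nvmf_tcp_init as run here; requires root and iproute2.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as in the nvmfpid line above), which is why spdk_nvme_perf later connects over a real NIC port rather than loopback.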
00:27:21.883 [2024-07-21 03:37:07.105418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.883 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.883 [2024-07-21 03:37:07.181528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.142 [2024-07-21 03:37:07.281003] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.142 [2024-07-21 03:37:07.281069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.142 [2024-07-21 03:37:07.281085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.142 [2024-07-21 03:37:07.281098] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.142 [2024-07-21 03:37:07.281111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.142 [2024-07-21 03:37:07.281193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.142 [2024-07-21 03:37:07.281245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.142 [2024-07-21 03:37:07.281296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.142 [2024-07-21 03:37:07.281299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.142 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.400 [2024-07-21 03:37:07.517111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.400 Malloc1 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.400 [2024-07-21 03:37:07.568331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2488466 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:22.400 03:37:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:22.400 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:24.335 "tick_rate": 2700000000, 
00:27:24.335 "poll_groups": [ 00:27:24.335 { 00:27:24.335 "name": "nvmf_tgt_poll_group_000", 00:27:24.335 "admin_qpairs": 1, 00:27:24.335 "io_qpairs": 1, 00:27:24.335 "current_admin_qpairs": 1, 00:27:24.335 "current_io_qpairs": 1, 00:27:24.335 "pending_bdev_io": 0, 00:27:24.335 "completed_nvme_io": 20295, 00:27:24.335 "transports": [ 00:27:24.335 { 00:27:24.335 "trtype": "TCP" 00:27:24.335 } 00:27:24.335 ] 00:27:24.335 }, 00:27:24.335 { 00:27:24.335 "name": "nvmf_tgt_poll_group_001", 00:27:24.335 "admin_qpairs": 0, 00:27:24.335 "io_qpairs": 1, 00:27:24.335 "current_admin_qpairs": 0, 00:27:24.335 "current_io_qpairs": 1, 00:27:24.335 "pending_bdev_io": 0, 00:27:24.335 "completed_nvme_io": 20395, 00:27:24.335 "transports": [ 00:27:24.335 { 00:27:24.335 "trtype": "TCP" 00:27:24.335 } 00:27:24.335 ] 00:27:24.335 }, 00:27:24.335 { 00:27:24.335 "name": "nvmf_tgt_poll_group_002", 00:27:24.335 "admin_qpairs": 0, 00:27:24.335 "io_qpairs": 1, 00:27:24.335 "current_admin_qpairs": 0, 00:27:24.335 "current_io_qpairs": 1, 00:27:24.335 "pending_bdev_io": 0, 00:27:24.335 "completed_nvme_io": 20179, 00:27:24.335 "transports": [ 00:27:24.335 { 00:27:24.335 "trtype": "TCP" 00:27:24.335 } 00:27:24.335 ] 00:27:24.335 }, 00:27:24.335 { 00:27:24.335 "name": "nvmf_tgt_poll_group_003", 00:27:24.335 "admin_qpairs": 0, 00:27:24.335 "io_qpairs": 1, 00:27:24.335 "current_admin_qpairs": 0, 00:27:24.335 "current_io_qpairs": 1, 00:27:24.335 "pending_bdev_io": 0, 00:27:24.335 "completed_nvme_io": 19862, 00:27:24.335 "transports": [ 00:27:24.335 { 00:27:24.335 "trtype": "TCP" 00:27:24.335 } 00:27:24.335 ] 00:27:24.335 } 00:27:24.335 ] 00:27:24.335 }' 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:24.335 03:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2488466 00:27:32.437 Initializing NVMe Controllers 00:27:32.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:32.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:32.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:32.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:32.437 Initialization complete. Launching workers. 
00:27:32.437 ======================================================== 00:27:32.437 Latency(us) 00:27:32.437 Device Information : IOPS MiB/s Average min max 00:27:32.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10517.40 41.08 6085.20 2474.48 10595.47 00:27:32.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10653.60 41.62 6008.66 2670.22 9374.23 00:27:32.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10610.40 41.45 6032.06 2698.54 9977.68 00:27:32.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10668.10 41.67 6001.14 2887.57 8871.82 00:27:32.437 ======================================================== 00:27:32.437 Total : 42449.49 165.82 6031.58 2474.48 10595.47 00:27:32.437 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.438 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.438 rmmod nvme_tcp 00:27:32.438 rmmod nvme_fabrics 00:27:32.438 rmmod nvme_keyring 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2488399 ']' 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2488399 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2488399 ']' 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2488399 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2488399 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2488399' 00:27:32.695 killing process with pid 2488399 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2488399 00:27:32.695 03:37:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2488399 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.954 03:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.858 03:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.858 03:37:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:34.858 03:37:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:35.793 03:37:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:37.693 03:37:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.958 03:37:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:42.958 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.958 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:42.959 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
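This second discovery pass looks identical to the first because the harness has just reloaded the NIC driver: adq_reload_driver runs rmmod ice, modprobe ice, then waits, so the E810 ports come back with no leftover channel or traffic-class state before ADQ is configured. As a sketch, with the same settle delay used in this run:

    # Sketch of adq_reload_driver from perf_adq.sh (steps visible above).
    rmmod ice
    modprobe ice
    sleep 5    # let the ports re-register before reconfiguring them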
00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:42.959 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:42.959 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.959 
03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:27:42.959 00:27:42.959 --- 10.0.0.2 ping statistics --- 00:27:42.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.959 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:27:42.959 00:27:42.959 --- 10.0.0.1 ping statistics --- 00:27:42.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.959 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:42.959 net.core.busy_poll = 1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:42.959 net.core.busy_read = 1 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:42.959 03:37:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2491054 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2491054 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2491054 ']' 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.959 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.959 [2024-07-21 03:37:28.106884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:42.959 [2024-07-21 03:37:28.106977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.959 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.959 [2024-07-21 03:37:28.180179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.218 [2024-07-21 03:37:28.272599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.218 [2024-07-21 03:37:28.272658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.218 [2024-07-21 03:37:28.272685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.218 [2024-07-21 03:37:28.272699] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.218 [2024-07-21 03:37:28.272711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
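The adq_configure_driver step traced just above is the heart of this second pass: busy polling is enabled via sysctl, hardware TC offload is switched on for the target port, an mqprio root qdisc carves the queues into two traffic classes, a hardware flower filter steers NVMe/TCP traffic (destination 10.0.0.2, TCP port 4420) into TC 1, and the set_xps_rxqs helper aligns transmit queues afterwards. Collected into one sketch, with the interface, address, and queue layout from this run and the ethtool/tc commands executed inside the target namespace as the trace does:

    # Sketch of adq_configure_driver for the ADQ pass; the values
    # (cvl_0_0, 10.0.0.2, 2 TCs of 2 queues) are the ones in this trace.
    in_ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # target-side commands
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    in_ns ethtool --offload cvl_0_0 hw-tc-offload on
    in_ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to the NIC:
    in_ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    in_ns tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (port 4420) into TC1 entirely in hardware:
    in_ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

Once the target is up (note the --sock-priority 1 transport below, matching TC 1), the harness verifies placement the same way it did in the baseline pass, by counting poll groups in the nvmf_get_stats output; with ADQ active it expects the I/O qpairs packed onto two groups and the other two idle, e.g. (assuming SPDK's bundled scripts/rpc.py as the RPC client):

    scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l    # 2 idle poll groups expected in this run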
00:27:43.218 [2024-07-21 03:37:28.272768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.218 [2024-07-21 03:37:28.272824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.218 [2024-07-21 03:37:28.272915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.218 [2024-07-21 03:37:28.272917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 [2024-07-21 03:37:28.489291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 Malloc1 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.218 03:37:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.218 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.477 [2024-07-21 03:37:28.542255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2491192 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:43.477 03:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:43.477 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:45.377 "tick_rate": 2700000000, 00:27:45.377 "poll_groups": [ 00:27:45.377 { 00:27:45.377 "name": "nvmf_tgt_poll_group_000", 00:27:45.377 "admin_qpairs": 1, 00:27:45.377 "io_qpairs": 2, 00:27:45.377 "current_admin_qpairs": 1, 00:27:45.377 "current_io_qpairs": 2, 00:27:45.377 "pending_bdev_io": 0, 00:27:45.377 "completed_nvme_io": 26931, 00:27:45.377 "transports": [ 00:27:45.377 { 00:27:45.377 "trtype": "TCP" 00:27:45.377 } 00:27:45.377 ] 00:27:45.377 }, 00:27:45.377 { 00:27:45.377 "name": "nvmf_tgt_poll_group_001", 00:27:45.377 "admin_qpairs": 0, 00:27:45.377 "io_qpairs": 2, 00:27:45.377 "current_admin_qpairs": 0, 00:27:45.377 "current_io_qpairs": 2, 00:27:45.377 "pending_bdev_io": 0, 00:27:45.377 "completed_nvme_io": 24625, 00:27:45.377 "transports": [ 00:27:45.377 { 00:27:45.377 "trtype": "TCP" 00:27:45.377 } 00:27:45.377 ] 00:27:45.377 }, 00:27:45.377 { 00:27:45.377 "name": "nvmf_tgt_poll_group_002", 00:27:45.377 "admin_qpairs": 0, 00:27:45.377 "io_qpairs": 0, 00:27:45.377 "current_admin_qpairs": 0, 00:27:45.377 "current_io_qpairs": 0, 00:27:45.377 "pending_bdev_io": 0, 00:27:45.377 "completed_nvme_io": 0, 
00:27:45.377 "transports": [ 00:27:45.377 { 00:27:45.377 "trtype": "TCP" 00:27:45.377 } 00:27:45.377 ] 00:27:45.377 }, 00:27:45.377 { 00:27:45.377 "name": "nvmf_tgt_poll_group_003", 00:27:45.377 "admin_qpairs": 0, 00:27:45.377 "io_qpairs": 0, 00:27:45.377 "current_admin_qpairs": 0, 00:27:45.377 "current_io_qpairs": 0, 00:27:45.377 "pending_bdev_io": 0, 00:27:45.377 "completed_nvme_io": 0, 00:27:45.377 "transports": [ 00:27:45.377 { 00:27:45.377 "trtype": "TCP" 00:27:45.377 } 00:27:45.377 ] 00:27:45.377 } 00:27:45.377 ] 00:27:45.377 }' 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:45.377 03:37:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2491192 00:27:53.482 Initializing NVMe Controllers 00:27:53.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:53.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:53.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:53.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:53.482 Initialization complete. Launching workers. 00:27:53.482 ======================================================== 00:27:53.482 Latency(us) 00:27:53.482 Device Information : IOPS MiB/s Average min max 00:27:53.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7041.47 27.51 9091.88 1091.07 54546.66 00:27:53.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7435.47 29.04 8608.92 1736.91 54135.36 00:27:53.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6193.07 24.19 10366.53 1805.83 55551.05 00:27:53.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6525.57 25.49 9810.85 1806.75 53771.30 00:27:53.482 ======================================================== 00:27:53.482 Total : 27195.59 106.23 9422.62 1091.07 55551.05 00:27:53.482 00:27:53.482 03:37:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:53.482 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.482 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.483 rmmod nvme_tcp 00:27:53.483 rmmod nvme_fabrics 00:27:53.483 rmmod nvme_keyring 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2491054 ']' 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2491054 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2491054 ']' 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2491054 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:53.483 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2491054 00:27:53.740 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:53.740 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:53.740 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2491054' 00:27:53.740 killing process with pid 2491054 00:27:53.740 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2491054 00:27:53.740 03:37:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2491054 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.740 03:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.050 03:37:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.050 03:37:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:57.050 00:27:57.050 real 0m45.008s 00:27:57.050 user 2m35.986s 00:27:57.050 sys 0m11.085s 00:27:57.050 03:37:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:57.050 03:37:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.050 ************************************ 00:27:57.050 END TEST nvmf_perf_adq 00:27:57.050 ************************************ 00:27:57.050 03:37:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:57.050 03:37:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:57.050 03:37:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:57.050 03:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.050 ************************************ 00:27:57.050 START TEST nvmf_shutdown 00:27:57.050 ************************************ 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:57.050 * Looking for test storage... 
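(For reference, the adq_configure_nvmf_target sequence and the poll-group check traced in the test that just finished reduce to the RPC calls below. This is a condensed sketch, not the verbatim script: it assumes a running nvmf_tgt reachable over the default RPC socket and uses scripts/rpc.py, the client that the rpc_cmd wrapper invokes; every method name, flag, and value is taken from the trace above.)

# ADQ-oriented socket options, set before framework init (perf_adq.sh@42-44)
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 \
    --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init

# TCP transport with a non-default socket priority (perf_adq.sh@45)
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# Backing bdev, subsystem, namespace, and listener (perf_adq.sh@46-49)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# After the perf run, count poll groups that saw no I/O qpairs
# (perf_adq.sh@99-101); with placement enabled, at least two of the
# four groups should stay idle, as they did in the run above.
scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l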
00:27:57.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.050 ************************************ 00:27:57.050 START TEST nvmf_shutdown_tc1 00:27:57.050 ************************************ 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:57.050 03:37:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.050 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.051 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.051 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.051 03:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.948 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.949 03:37:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.949 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:27:59.205 00:27:59.205 --- 10.0.0.2 ping statistics --- 00:27:59.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.205 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:27:59.205 00:27:59.205 --- 10.0.0.1 ping statistics --- 00:27:59.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.205 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2494470 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2494470 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2494470 ']' 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.205 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.205 [2024-07-21 03:37:44.364268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
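(The namespace plumbing that nvmftestinit just performed is worth restating in one place. A condensed sketch of the same commands, taken verbatim from the trace: cvl_0_0 and cvl_0_1 are the two E810 ports enumerated above, and the namespace name and 10.0.0.x addressing are this rig's defaults.)

# Flush both ports, then move one into a private namespace so the
# target (10.0.0.2, in the namespace) and the initiator (10.0.0.1,
# in the root namespace) talk over real wires on a single host
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, matching the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1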
00:27:59.205 [2024-07-21 03:37:44.364343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.205 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.205 [2024-07-21 03:37:44.427901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.205 [2024-07-21 03:37:44.516486] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.205 [2024-07-21 03:37:44.516534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.206 [2024-07-21 03:37:44.516564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.206 [2024-07-21 03:37:44.516577] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.206 [2024-07-21 03:37:44.516587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.206 [2024-07-21 03:37:44.516658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.206 [2024-07-21 03:37:44.516708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.206 [2024-07-21 03:37:44.516755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:59.206 [2024-07-21 03:37:44.516757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.462 [2024-07-21 03:37:44.664310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.462 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.463 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.463 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:59.463 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.463 03:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.463 Malloc1 00:27:59.463 [2024-07-21 03:37:44.750019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.720 Malloc2 00:27:59.720 Malloc3 00:27:59.720 Malloc4 00:27:59.720 Malloc5 00:27:59.720 Malloc6 00:27:59.720 Malloc7 00:27:59.978 Malloc8 00:27:59.978 Malloc9 00:27:59.978 Malloc10 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2494651 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2494651 /var/tmp/bdevperf.sock 00:27:59.978 03:37:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2494651 ']' 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": 
"$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 
00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.978 "adrfam": "ipv4", 00:27:59.978 "trsvcid": "$NVMF_PORT", 00:27:59.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.978 "hdgst": ${hdgst:-false}, 00:27:59.978 "ddgst": ${ddgst:-false} 00:27:59.978 }, 00:27:59.978 "method": "bdev_nvme_attach_controller" 00:27:59.978 } 00:27:59.978 EOF 00:27:59.978 )") 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.978 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.978 { 00:27:59.978 "params": { 00:27:59.978 "name": "Nvme$subsystem", 00:27:59.978 "trtype": "$TEST_TRANSPORT", 00:27:59.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "$NVMF_PORT", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.979 "hdgst": ${hdgst:-false}, 00:27:59.979 "ddgst": ${ddgst:-false} 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 } 00:27:59.979 EOF 00:27:59.979 )") 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.979 { 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme$subsystem", 00:27:59.979 "trtype": "$TEST_TRANSPORT", 00:27:59.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "$NVMF_PORT", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.979 "hdgst": ${hdgst:-false}, 00:27:59.979 "ddgst": ${ddgst:-false} 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 } 00:27:59.979 EOF 00:27:59.979 )") 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:59.979 03:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme1", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme2", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme3", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme4", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme5", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme6", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme7", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme8", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:59.979 "hdgst": false, 
00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme9", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 },{ 00:27:59.979 "params": { 00:27:59.979 "name": "Nvme10", 00:27:59.979 "trtype": "tcp", 00:27:59.979 "traddr": "10.0.0.2", 00:27:59.979 "adrfam": "ipv4", 00:27:59.979 "trsvcid": "4420", 00:27:59.979 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:59.979 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:59.979 "hdgst": false, 00:27:59.979 "ddgst": false 00:27:59.979 }, 00:27:59.979 "method": "bdev_nvme_attach_controller" 00:27:59.979 }' 00:27:59.979 [2024-07-21 03:37:45.265108] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:59.979 [2024-07-21 03:37:45.265177] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:00.236 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.236 [2024-07-21 03:37:45.329195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.236 [2024-07-21 03:37:45.415521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2494651 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:02.129 03:37:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:03.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2494651 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2494470 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.061 { 00:28:03.061 "params": { 00:28:03.061 "name": "Nvme$subsystem", 00:28:03.061 "trtype": "$TEST_TRANSPORT", 00:28:03.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.061 "adrfam": "ipv4", 00:28:03.061 "trsvcid": "$NVMF_PORT", 00:28:03.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.061 "hdgst": ${hdgst:-false}, 00:28:03.061 "ddgst": ${ddgst:-false} 00:28:03.061 }, 00:28:03.061 "method": "bdev_nvme_attach_controller" 00:28:03.061 } 00:28:03.061 EOF 00:28:03.061 )") 00:28:03.061 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.061 [... the @534/@554/cat xtrace above repeats identically for subsystems 2 through 10 ...] 00:28:03.062 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq .
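The xtrace above is the suite's config-generation idiom: one JSON fragment per subsystem is accumulated in a bash array via a heredoc, the fragments are comma-joined through IFS, and jq validates the result. A minimal self-contained sketch of the same pattern (the gen_target_json name, the fixed 10.0.0.2:4420 endpoint, and the outer "subsystems"/"bdev" wrapper are illustrative assumptions, not the suite's exact code):

    #!/usr/bin/env bash
    # Sketch: emit one bdev_nvme_attach_controller entry per subsystem index,
    # join the fragments with commas, and let jq validate/pretty-print the result.
    gen_target_json() {
        local config=() s
        for s in "${@:-1}"; do
            config+=("$(printf '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false}}' "$s" "$s" "$s")")
        done
        local IFS=,   # "${config[*]}" joins array elements on the first IFS character
        # The outer wrapper here is an assumption about how the fragments are consumed.
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
    }
    gen_target_json 1 2 3   # jq exits non-zero if the assembled JSON is malformed

The IFS=, join is what produces the },{ seams visible in the expanded printf that follows in the trace.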
00:28:03.062 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:03.062 03:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:03.062 "params": { 00:28:03.062 "name": "Nvme1", 00:28:03.062 "trtype": "tcp", 00:28:03.062 "traddr": "10.0.0.2", 00:28:03.062 "adrfam": "ipv4", 00:28:03.062 "trsvcid": "4420", 00:28:03.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.062 "hdgst": false, 00:28:03.062 "ddgst": false 00:28:03.062 }, 00:28:03.062 "method": "bdev_nvme_attach_controller" 00:28:03.062 },{ [... identical blocks for Nvme2 through Nvme10, differing only in the index ...] 00:28:03.062 }' 00:28:03.062 [2024-07-21 03:37:48.260433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:03.062 [2024-07-21 03:37:48.260510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494957 ] 00:28:03.062 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.062 [2024-07-21 03:37:48.325101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.320 [2024-07-21 03:37:48.411666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.687 Running I/O for 1 seconds...
00:28:06.071
00:28:06.071 Latency(us)
00:28:06.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:06.071 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme1n1 : 1.17 219.49 13.72 0.00 0.00 288758.71 21651.15 257872.02
00:28:06.071 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme2n1 : 1.10 232.18 14.51 0.00 0.00 267784.34 20000.62 251658.24
00:28:06.071 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme3n1 : 1.18 272.13 17.01 0.00 0.00 225522.42 16796.63 243891.01
00:28:06.071 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme4n1 : 1.09 234.88 14.68 0.00 0.00 255982.36 17185.00 253211.69
00:28:06.071 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme5n1 : 1.18 217.31 13.58 0.00 0.00 273262.93 22719.15 256318.58
00:28:06.071 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme6n1 : 1.17 223.07 13.94 0.00 0.00 260451.99 7281.78 246997.90
00:28:06.071 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme7n1 : 1.19 268.20 16.76 0.00 0.00 213978.07 19126.80 256318.58
00:28:06.071 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme8n1 : 1.20 267.35 16.71 0.00 0.00 211456.00 13495.56 256318.58
00:28:06.071 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme9n1 : 1.19 215.34 13.46 0.00 0.00 257917.91 22816.24 256318.58
00:28:06.071 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.071 Verification LBA range: start 0x0 length 0x400
00:28:06.071 Nvme10n1 : 1.18 216.34 13.52 0.00 0.00 252131.56 23301.69 274959.93
00:28:06.071 ===================================================================================================================
00:28:06.071 Total : 2366.28 147.89 0.00 0.00 248392.65 7281.78 274959.93
00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.071 rmmod nvme_tcp 00:28:06.071 rmmod nvme_fabrics 00:28:06.071 rmmod nvme_keyring 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2494470 ']' 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2494470 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 2494470 ']' 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 2494470 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2494470 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:06.071 03:37:51
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2494470' 00:28:06.071 killing process with pid 2494470 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 2494470 00:28:06.071 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 2494470 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.636 03:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.169 00:28:09.169 real 0m11.691s 00:28:09.169 user 0m33.738s 00:28:09.169 sys 0m3.225s 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.169 ************************************ 00:28:09.169 END TEST nvmf_shutdown_tc1 00:28:09.169 ************************************ 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.169 ************************************ 00:28:09.169 START TEST nvmf_shutdown_tc2 00:28:09.169 ************************************ 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.169 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.170 03:37:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.170 03:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:28:09.170 00:28:09.170 --- 10.0.0.2 ping statistics --- 00:28:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.170 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:09.170 00:28:09.170 --- 10.0.0.1 ping statistics --- 00:28:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.170 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2495722 
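The ping exchange above validates the split-namespace topology the suite builds before starting the target: one NIC port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to play the target, while the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. A sketch of the same wiring that runs anywhere, with a veth pair standing in for the physical ports (the interface and namespace names here are hypothetical):

    # Namespace plus veth pair emulating the two NIC ports (run as root).
    ip netns add tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini && ip link set veth_ini up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    # Bidirectional reachability check, mirroring the two pings in the log.
    ping -c 1 10.0.0.2
    ip netns exec tgt_ns ping -c 1 10.0.0.1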
00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2495722 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2495722 ']' 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.170 [2024-07-21 03:37:54.160624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:09.170 [2024-07-21 03:37:54.160698] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.170 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.170 [2024-07-21 03:37:54.232626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.170 [2024-07-21 03:37:54.327781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.170 [2024-07-21 03:37:54.327843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.170 [2024-07-21 03:37:54.327859] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.170 [2024-07-21 03:37:54.327873] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.170 [2024-07-21 03:37:54.327885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
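The target comes up with -m 0x1E, and SPDK spawns one reactor per set bit: 0x1E is binary 11110, that is cores 1 through 4, which matches the four reactor notices that follow. A quick way to expand any core mask (a hypothetical helper, not part of the suite):

    # Print the CPU cores selected by an SPDK core mask such as -m 0x1E.
    mask=0x1E
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i"
    done   # prints: core 1, core 2, core 3, core 4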
00:28:09.170 [2024-07-21 03:37:54.327958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.170 [2024-07-21 03:37:54.328077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.170 [2024-07-21 03:37:54.328156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.170 [2024-07-21 03:37:54.328158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.170 [2024-07-21 03:37:54.470141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.170 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.427 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.427 Malloc1 00:28:09.427 [2024-07-21 03:37:54.545122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.427 Malloc2 00:28:09.427 Malloc3 00:28:09.427 Malloc4 00:28:09.427 Malloc5 00:28:09.684 Malloc6 00:28:09.684 Malloc7 00:28:09.684 Malloc8 00:28:09.684 Malloc9 00:28:09.684 Malloc10 00:28:09.684 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.684 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:09.684 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.684 03:37:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2495899 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2495899 /var/tmp/bdevperf.sock 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2495899 ']' 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
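The Malloc1 through Malloc10 bdevs and the single TCP listener above are created by the batch of RPCs assembled into rpcs.txt; driven by hand, the equivalent sequence for one subsystem might look like the following (scripts/rpc.py against the default /var/tmp/spdk.sock, with the bdev size, serial number, and 10.0.0.2:4420 endpoint chosen for illustration):

    # One-time transport setup, then bdev -> subsystem -> namespace -> listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512   # 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Repeating the last four calls per index reproduces the cnode1..cnode10 layout that the tc2 config below attaches to.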
00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.941 { 00:28:09.941 "params": { 00:28:09.941 "name": "Nvme$subsystem", 00:28:09.941 "trtype": "$TEST_TRANSPORT", 00:28:09.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.941 "adrfam": "ipv4", 00:28:09.941 "trsvcid": "$NVMF_PORT", 00:28:09.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.941 "hdgst": ${hdgst:-false}, 00:28:09.941 "ddgst": ${ddgst:-false} 00:28:09.941 }, 00:28:09.941 "method": "bdev_nvme_attach_controller" 00:28:09.941 } 00:28:09.941 EOF 00:28:09.941 )") 00:28:09.941 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:09.941 [... the @534/@554/cat xtrace repeats identically for subsystems 2 through 10 ...] 00:28:09.942 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq .
00:28:09.942 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:09.942 03:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:09.942 "params": { 00:28:09.942 "name": "Nvme1", 00:28:09.942 "trtype": "tcp", 00:28:09.942 "traddr": "10.0.0.2", 00:28:09.942 "adrfam": "ipv4", 00:28:09.942 "trsvcid": "4420", 00:28:09.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:09.942 "hdgst": false, 00:28:09.942 "ddgst": false 00:28:09.942 }, 00:28:09.942 "method": "bdev_nvme_attach_controller" 00:28:09.942 },{ [... identical blocks for Nvme2 through Nvme10, matching the tc1 expansion earlier ...] 00:28:09.943 }' 00:28:09.943 [2024-07-21 03:37:55.047638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:09.943 [2024-07-21 03:37:55.047720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495899 ] 00:28:09.943 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.943 [2024-07-21 03:37:55.112473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.943 [2024-07-21 03:37:55.198976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.312 Running I/O for 10 seconds... 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:11.877 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.135 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.392 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2495899 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2495899 ']' 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2495899 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2495899 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2495899' 00:28:12.393 killing process with pid 2495899 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2495899 00:28:12.393 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2495899
00:28:12.393 Received shutdown signal, test time was about 0.927953 seconds
00:28:12.393
00:28:12.393 Latency(us)
00:28:12.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:12.393 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme1n1 : 0.88 219.19 13.70 0.00 0.00 288384.70 19029.71 257872.02
00:28:12.393 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme2n1 : 0.90 213.92 13.37 0.00 0.00 289515.39 22816.24 253211.69
00:28:12.393 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme3n1 : 0.93 276.12 17.26 0.00 0.00 219883.52 17282.09 254765.13
00:28:12.393 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme4n1 : 0.92 278.38 17.40 0.00 0.00 213292.75 16602.45 254765.13
00:28:12.393 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme5n1 : 0.89 219.44 13.71 0.00 0.00 262790.20 3689.43 239230.67
00:28:12.393 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme6n1 : 0.92 209.61 13.10 0.00 0.00 271052.04 28350.39 282727.16
00:28:12.393 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme7n1 : 0.89 216.48 13.53 0.00 0.00 255434.90 17767.54 220589.32
00:28:12.393 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme8n1 : 0.91 291.59 18.22 0.00 0.00 185159.52 3568.07 253211.69
00:28:12.393 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme9n1 : 0.91 211.46 13.22 0.00 0.00 250491.13 36505.98 274959.93
00:28:12.393 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.393 Verification LBA range: start 0x0 length 0x400
00:28:12.393 Nvme10n1 : 0.92 208.01 13.00 0.00 0.00 249723.01 27379.48 278066.82
00:28:12.393 ===================================================================================================================
00:28:12.393 Total : 2344.20 146.51 0.00 0.00 244522.38 3568.07 282727.16
00:28:12.650 03:37:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
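The loop traced above is waitforio from test/nvmf/target/shutdown.sh: it samples Nvme1n1's I/O statistics every 0.25 s, up to ten times, and succeeds once at least 100 reads have completed, which is what gates the kill of the bdevperf process (pid 2495899). Reconstructed from the xtrace lines as a sketch, not the verbatim script (rpc_cmd is the harness wrapper around SPDK's rpc.py):

    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i
        for (( i = 10; i != 0; i-- )); do
            local io_count
            # num_read_ops for the named bdev, queried over the bdevperf RPC socket
            io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In this run the first sample saw 67 completed reads and the second saw 131, so the loop broke on its second pass and the shutdown was triggered with I/O still in flight.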
00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.579 rmmod nvme_tcp 00:28:13.579 rmmod nvme_fabrics 00:28:13.579 rmmod nvme_keyring 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2495722 ']' 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2495722 ']' 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2495722' 00:28:13.579 killing process with pid 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2495722 00:28:13.579 03:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2495722 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.143 03:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:16.669 00:28:16.669 real 0m7.496s 00:28:16.669 user 0m22.409s 00:28:16.669 sys 0m1.437s 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.669 ************************************ 00:28:16.669 END TEST nvmf_shutdown_tc2 00:28:16.669 ************************************ 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:16.669 ************************************ 00:28:16.669 START TEST nvmf_shutdown_tc3 00:28:16.669 ************************************ 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 
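The arrays declared here feed gather_supported_nvmf_pci_devs, which runs next: candidate NICs are matched by PCI vendor/device ID against the e810/x722/mlx tables, and every surviving PCI function contributes the kernel netdevs registered under it in sysfs. That second step reduces to roughly the following sketch of the nvmf/common.sh logic (the real code also screens on transport type and interface state):

    # E810 device IDs seen in this run: 0x1592 and 0x159b, Intel vendor 0x8086
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue     # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep the interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host both E810 ports (0000:0a:00.0 and 0000:0a:00.1) match, yielding cvl_0_0 and cvl_0_1 as the interfaces the test will use.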
00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.669 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:16.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.670 03:38:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:16.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:16.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:16.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:16.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:28:16.670 00:28:16.670 --- 10.0.0.2 ping statistics --- 00:28:16.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.670 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:16.670 00:28:16.670 --- 10.0.0.1 ping statistics --- 00:28:16.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.670 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2496810 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2496810 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2496810 ']' 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
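Stripped of the xtrace noise, nvmf_tcp_init built the following topology just above: the second E810 port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, while the first port (cvl_0_0, 10.0.0.2) is moved into a private network namespace where the target will run, so NVMe/TCP traffic between the two addresses actually traverses the physical ports. The commands, as traced (initial address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings answering in under 0.25 ms confirms the data path before nvmf_tgt is launched inside the namespace.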
00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:16.670 03:38:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.670 [2024-07-21 03:38:01.734703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:16.670 [2024-07-21 03:38:01.734773] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.670 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.670 [2024-07-21 03:38:01.806833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.670 [2024-07-21 03:38:01.900610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.671 [2024-07-21 03:38:01.900691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.671 [2024-07-21 03:38:01.900717] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.671 [2024-07-21 03:38:01.900731] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.671 [2024-07-21 03:38:01.900743] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.671 [2024-07-21 03:38:01.900808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.671 [2024-07-21 03:38:01.900936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.671 [2024-07-21 03:38:01.900993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.671 [2024-07-21 03:38:01.900995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.929 [2024-07-21 03:38:02.036167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:16.929 
03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.929 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.929 Malloc1 00:28:16.929 [2024-07-21 03:38:02.111049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.929 Malloc2 00:28:16.929 Malloc3 00:28:16.929 Malloc4 00:28:17.191 Malloc5 00:28:17.191 Malloc6 00:28:17.191 Malloc7 00:28:17.191 Malloc8 00:28:17.191 Malloc9 00:28:17.508 Malloc10 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.508 03:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2496994 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2496994 /var/tmp/bdevperf.sock 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2496994 ']' 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.508 "hdgst": ${hdgst:-false}, 00:28:17.508 "ddgst": ${ddgst:-false} 00:28:17.508 }, 00:28:17.508 "method": "bdev_nvme_attach_controller" 00:28:17.508 } 00:28:17.508 EOF 00:28:17.508 )") 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.508 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.508 { 00:28:17.508 "params": { 00:28:17.508 "name": "Nvme$subsystem", 00:28:17.508 "trtype": "$TEST_TRANSPORT", 00:28:17.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.508 "adrfam": "ipv4", 00:28:17.508 "trsvcid": "$NVMF_PORT", 00:28:17.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.509 "hdgst": ${hdgst:-false}, 00:28:17.509 "ddgst": ${ddgst:-false} 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 } 00:28:17.509 EOF 00:28:17.509 )") 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.509 { 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme$subsystem", 00:28:17.509 "trtype": "$TEST_TRANSPORT", 00:28:17.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "$NVMF_PORT", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.509 "hdgst": ${hdgst:-false}, 00:28:17.509 "ddgst": ${ddgst:-false} 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 } 00:28:17.509 EOF 00:28:17.509 )") 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.509 { 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme$subsystem", 00:28:17.509 "trtype": "$TEST_TRANSPORT", 00:28:17.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "$NVMF_PORT", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.509 "hdgst": ${hdgst:-false}, 00:28:17.509 "ddgst": ${ddgst:-false} 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 } 00:28:17.509 EOF 00:28:17.509 )") 00:28:17.509 03:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:17.509 03:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme1", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme2", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme3", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme4", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme5", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme6", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme7", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme8", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 
00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme9", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 },{ 00:28:17.509 "params": { 00:28:17.509 "name": "Nvme10", 00:28:17.509 "trtype": "tcp", 00:28:17.509 "traddr": "10.0.0.2", 00:28:17.509 "adrfam": "ipv4", 00:28:17.509 "trsvcid": "4420", 00:28:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:17.509 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:17.509 "hdgst": false, 00:28:17.509 "ddgst": false 00:28:17.509 }, 00:28:17.509 "method": "bdev_nvme_attach_controller" 00:28:17.509 }' 00:28:17.509 [2024-07-21 03:38:02.601976] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:17.509 [2024-07-21 03:38:02.602070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496994 ] 00:28:17.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.509 [2024-07-21 03:38:02.666884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.509 [2024-07-21 03:38:02.753471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.880 Running I/O for 10 seconds... 
00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:19.446 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:28:19.718 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2496810 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 2496810 ']' 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 2496810 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2496810 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2496810' 00:28:19.719 killing process with pid 2496810 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 2496810 00:28:19.719 03:38:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 2496810 00:28:19.719 [2024-07-21 03:38:04.937628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719
[... the identical tcp.c:1598 nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x71e700 repeats a few dozen more times, timestamps 2024-07-21 03:38:04.937940 through 03:38:04.938673 ...]
00:28:19.719 [2024-07-21 03:38:04.938685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.938785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e700 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is 
same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.719 [2024-07-21 03:38:04.940531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.940989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941091] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.941167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7a00 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.942883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.942937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.942956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.942976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.942991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ff90 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.943130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 
03:38:04.943208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245190 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.943305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.720 [2024-07-21 03:38:04.943419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247300 is same with the state(5) to be set 00:28:19.720 [2024-07-21 03:38:04.943822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.943849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.943909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.943961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.943985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.943999] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.944015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.944029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.944045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.944059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.944075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.944088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.720 [2024-07-21 03:38:04.944104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.720 [2024-07-21 03:38:04.944118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:29056 len:1[2024-07-21 03:38:04.944639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-07-21 03:38:04.944746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-21 03:38:04.944760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.944776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1he state(5) to be set 00:28:19.721 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-21 03:38:04.944793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.944826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:28:19.721 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1[2024-07-21 03:38:04.944879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.944970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.944984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.944995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1[2024-07-21 03:38:04.944996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 he state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.945011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.945011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:28:19.721 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.945026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.945029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.945039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.945043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.945051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.945058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.721 [2024-07-21 03:38:04.945064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.721 [2024-07-21 03:38:04.945072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.721 [2024-07-21 03:38:04.945078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.945182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1he state(5) to be set 00:28:19.722 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-21 03:38:04.945198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 he state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.945277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1he state(5) to be set 00:28:19.722 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:28:19.722 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with t[2024-07-21 03:38:04.945371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:1he state(5) to be set 00:28:19.722 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-21 03:38:04.945387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 he state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945442] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-21 03:38:04.945455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 he state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d7ea0 is same with the state(5) to be set 00:28:19.722 [2024-07-21 03:38:04.945517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.722 [2024-07-21 03:38:04.945962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.722 [2024-07-21 03:38:04.945976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.723 [2024-07-21 03:38:04.945990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.723 [2024-07-21 03:38:04.946004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.723 [2024-07-21 03:38:04.946044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:19.723 [2024-07-21 03:38:04.946120] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x131eef0 was disconnected and freed. reset controller. 
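For reference, the polling helper driving the first half of this trace is waitforio from target/shutdown.sh: it queries bdevperf's RPC socket until the bdev under test has completed at least 100 reads (67 on the first pass above, 131 on the second), retrying up to 10 times with a 0.25 s sleep. A minimal sketch of that loop, paraphrased from the xtrace rather than copied from the script (rpc_cmd is the test harness's wrapper around scripts/rpc.py):

    # Sketch of waitforio as reconstructed from the xtrace above; not
    # the verbatim function from test/nvmf/target/shutdown.sh.
    waitforio() {
        local sock=$1 bdev=$2     # /var/tmp/bdevperf.sock Nvme1n1
        [ -z "$sock" ] && return 1
        [ -z "$bdev" ] && return 1
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # Query per-bdev I/O statistics over the app's RPC socket
            # and extract the completed read count with jq.
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            # The test treats I/O as flowing once 100 reads completed.
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

Only after this returns 0 does the test kill the SPDK app (pid 2496810), which is what appears to trigger the flood of recv-state errors and ABORTED - SQ DELETION completions collapsed above: bdevperf still has admin and I/O commands in flight when the connection goes away, so every outstanding command is printed and aborted before the controller is reset.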
00:28:19.723 [2024-07-21 03:38:04.948268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is 
same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set 00:28:19.723 [2024-07-21 03:38:04.948887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.948989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.723 [2024-07-21 03:38:04.949120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.723 [2024-07-21 03:38:04.949146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.723 [2024-07-21 03:38:04.949179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.723 [2024-07-21 03:38:04.949183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.723 [2024-07-21 03:38:04.949191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.724 [2024-07-21 03:38:04.949200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8800 is same with the state(5) to be set
00:28:19.724 [2024-07-21 03:38:04.949223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.949462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.949477]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.949977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.724 [2024-07-21 03:38:04.950388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.724 [2024-07-21 03:38:04.950403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.950417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.950418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.724 [2024-07-21 03:38:04.950431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.724 [2024-07-21 03:38:04.950445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.724 [2024-07-21 03:38:04.950444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.950983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.950989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.950996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.951010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.951022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.951035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.951064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.951077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.951090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.951102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.951130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.725 [2024-07-21 03:38:04.951129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.725 [2024-07-21 03:38:04.951157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.725 [2024-07-21 03:38:04.951161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.726 [2024-07-21 03:38:04.951169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.951235] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x139f1f0 was disconnected and freed. reset controller.
00:28:19.726 [2024-07-21 03:38:04.951244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8cc0 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.951663] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.726 [2024-07-21 03:38:04.951707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247300 (9): Bad file descriptor 00:28:19.726 [2024-07-21 03:38:04.952837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) 
to be set 00:28:19.726 [2024-07-21 03:38:04.952985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.952997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set 00:28:19.726 [2024-07-21 03:38:04.953273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:19.726 [2024-07-21 03:38:04.953329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275f90 (9): Bad file descriptor
00:28:19.726 [2024-07-21 03:38:04.953379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.726 [2024-07-21 03:38:04.953466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.726 [2024-07-21 03:38:04.953482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.726 [2024-07-21 03:38:04.953510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.726 [2024-07-21 03:38:04.953525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.726 [2024-07-21 03:38:04.953538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.726 [2024-07-21 03:38:04.953551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.726 [2024-07-21 03:38:04.953564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.726 [2024-07-21 03:38:04.953577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409ec0 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ff90 (9): Bad file descriptor
00:28:19.726 [2024-07-21 03:38:04.953640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.726 [2024-07-21 03:38:04.953677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.953690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.953695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.953718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9160 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.953721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a6b0 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.953866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.953978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.953999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.727 [2024-07-21 03:38:04.954013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3f610 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.954052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1245190 (9): Bad file descriptor
00:28:19.727 [2024-07-21 03:38:04.954102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.727 [2024-07-21 03:38:04.954123]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-07-21 03:38:04.954138 → 03:38:04.954212: ASYNC EVENT REQUEST (0c) qid:0 cid:1-3 each printed by nvme_admin_qpair_print_command and completed ABORTED - SQ DELETION (00/08) ...]
00:28:19.727 [2024-07-21 03:38:04.954225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272810 is same with the state(5) to be set
00:28:19.727 [2024-07-21 03:38:04.954803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9600 is same with the state(5) to be set
[... previous message repeated ~60 times for tqpair=0x8d9600, 03:38:04.954803 → 03:38:04.955713, with the following distinct errors interleaved ...]
00:28:19.727 [2024-07-21 03:38:04.955046] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.728 [2024-07-21 03:38:04.955220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.728 [2024-07-21 03:38:04.955248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1247300 with addr=10.0.0.2, port=4420
00:28:19.728 [2024-07-21 03:38:04.955266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247300 is same with the state(5) to be set
00:28:19.728 [2024-07-21 03:38:04.955403] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.728 [2024-07-21 03:38:04.956111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.728 [2024-07-21 03:38:04.956140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1275f90 with addr=10.0.0.2, port=4420
00:28:19.728 [2024-07-21 03:38:04.956157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1275f90 is same with the state(5) to be set
00:28:19.728 [2024-07-21 03:38:04.956176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247300 (9): Bad file descriptor
00:28:19.728 [2024-07-21 03:38:04.956251] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.728 [2024-07-21 03:38:04.956343] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.728 [2024-07-21 03:38:04.956470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9aa0 is same with the state(5) to be set
[... previous message repeated ~60 times for tqpair=0x8d9aa0, 03:38:04.956470 → 03:38:04.957373, with the following distinct errors interleaved ...]
00:28:19.728 [2024-07-21 03:38:04.956643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275f90 (9): Bad file descriptor
00:28:19.728 [2024-07-21 03:38:04.956669] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:19.728 [2024-07-21 03:38:04.956684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:19.728 [2024-07-21 03:38:04.956700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:19.728 [2024-07-21 03:38:04.956805] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.729 [2024-07-21 03:38:04.957038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:19.729 [2024-07-21 03:38:04.957062] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:19.729 [2024-07-21 03:38:04.957077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:19.729 [2024-07-21 03:38:04.957091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:19.729 [2024-07-21 03:38:04.957188] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.729 [2024-07-21 03:38:04.957328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:19.729 [2024-07-21 03:38:04.957584] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:19.729 [2024-07-21 03:38:04.958121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.729 [2024-07-21 03:38:04.958145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:1 through cid:62 (nsid:1, lba 16512 → 24320 in steps of 128, len:128 each) each printed and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0, 03:38:04.958168 → 03:38:04.960127 ...]
00:28:19.730 [2024-07-21 03:38:04.960143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.730 [2024-07-21 03:38:04.960158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.730 [2024-07-21 03:38:04.960173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12433e0 is same with the state(5) to be set
00:28:19.730 [2024-07-21 03:38:04.960247] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb:
*NOTICE*: qpair 0x12433e0 was disconnected and freed. reset controller.
00:28:19.730 [2024-07-21 03:38:04.961448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:19.730 [2024-07-21 03:38:04.961510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eaf50 (9): Bad file descriptor
00:28:19.730 [2024-07-21 03:38:04.962028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.730 [2024-07-21 03:38:04.962057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13eaf50 with addr=10.0.0.2, port=4420
00:28:19.730 [2024-07-21 03:38:04.962074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eaf50 is same with the state(5) to be set
00:28:19.730 [2024-07-21 03:38:04.962144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eaf50 (9): Bad file descriptor
00:28:19.730 [2024-07-21 03:38:04.962213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:19.730 [2024-07-21 03:38:04.962232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:19.730 [2024-07-21 03:38:04.962253] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:19.730 [2024-07-21 03:38:04.962312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:19.730 [2024-07-21 03:38:04.963354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1409ec0 (9): Bad file descriptor
[... 2024-07-21 03:38:04.963419 → 03:38:04.963528: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each printed by nvme_admin_qpair_print_command and completed ABORTED - SQ DELETION (00/08) ...]
00:28:19.731 [2024-07-21 03:38:04.963541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ebf00 is same with the state(5) to be set
00:28:19.731 [2024-07-21 03:38:04.963571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a6b0 (9): Bad file descriptor
00:28:19.731 [2024-07-21 03:38:04.963601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*:
Failed to flush tqpair=0xd3f610 (9): Bad file descriptor
00:28:19.731 [2024-07-21 03:38:04.963650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272810 (9): Bad file descriptor
00:28:19.731 [2024-07-21 03:38:04.963782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.731 [2024-07-21 03:38:04.963805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:1 through cid:48 (nsid:1, lba 16512 → 22528 in steps of 128, len:128 each) each printed and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0, 03:38:04.963827 → 03:38:04.965329 ...]
00:28:19.732 [2024-07-21 03:38:04.965344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.965358] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.965374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.965388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.965403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.965417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.965433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.965447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.965463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.976808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.976882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.976899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.976914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.976941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.976958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.976972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.976988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.977183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.977200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201d0 is same with the state(5) to be set 00:28:19.732 [2024-07-21 03:38:04.978621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.732 [2024-07-21 03:38:04.978925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.732 [2024-07-21 03:38:04.978939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.978955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.978986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.979985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.979999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.733 [2024-07-21 03:38:04.980059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.733 [2024-07-21 03:38:04.980260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.733 [2024-07-21 03:38:04.980274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 
03:38:04.980371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.734 [2024-07-21 03:38:04.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.734 [2024-07-21 03:38:04.980639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13178b0 is same with the state(5) to be set 00:28:19.734 [2024-07-21 03:38:04.982716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:19.734 [2024-07-21 03:38:04.982749] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:19.734 [2024-07-21 03:38:04.982866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
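[annotation, not part of the captured output] The "(00/08)" tuple that spdk_nvme_print_completion logs above is the NVMe status code type and status code: generic command status (0x0) with Command Aborted due to SQ Deletion (0x08). Each queued READ was cancelled rather than executed when its qpair was torn down for the controller reset. A minimal sketch of decoding that tuple; the constant names and retry helper below are local illustrations, not SPDK identifiers:

/* Sketch: decode the "(sct/sc)" pair printed above, e.g. "(00/08)".
 * Values follow the NVMe spec: status code type 0x0 = generic command
 * status, status code 0x08 = Command Aborted due to SQ Deletion. */
#include <stdbool.h>
#include <stdio.h>

#define SCT_GENERIC            0x0 /* local name, not an SPDK identifier */
#define SC_ABORTED_SQ_DELETION 0x8 /* local name, not an SPDK identifier */

/* A command completed with this status was cancelled before execution
 * because its submission queue went away (here: the TCP qpair being
 * destroyed during reset), so a multipath/retry layer would typically
 * treat it as safe to requeue. */
static bool aborted_by_sq_deletion(unsigned sct, unsigned sc)
{
    return sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION;
}

int main(void)
{
    printf("(00/08) retryable: %s\n",
           aborted_by_sq_deletion(0x0, 0x8) ? "yes" : "no");
    return 0;
}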
00:28:19.734 [2024-07-21 03:38:04.982926] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:19.734 [2024-07-21 03:38:04.983031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:19.734 [2024-07-21 03:38:04.983273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.734 [2024-07-21 03:38:04.983302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1245190 with addr=10.0.0.2, port=4420
00:28:19.734 [2024-07-21 03:38:04.983320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245190 is same with the state(5) to be set
00:28:19.734 [2024-07-21 03:38:04.983436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.734 [2024-07-21 03:38:04.983461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129ff90 with addr=10.0.0.2, port=4420
00:28:19.734 [2024-07-21 03:38:04.983476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ff90 is same with the state(5) to be set
00:28:19.734 [2024-07-21 03:38:04.983815 - 03:38:04.985813] nvme_qpair.c: *NOTICE*: same READ / ABORTED - SQ DELETION (00/08) pattern for sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 [64 command/completion pairs condensed]
00:28:19.736 [2024-07-21 03:38:04.985827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139dcd0 is same with the state(5) to be set
00:28:19.736 [2024-07-21 03:38:04.987098 - 03:38:04.987801] nvme_qpair.c: *NOTICE*: same READ / ABORTED - SQ DELETION (00/08) pattern for sqid:1 cid:0..22 nsid:1 lba:16384..19200 len:128 [command/completion pairs for cid:0..21 plus the READ for cid:22 condensed; the cid:22 entry is cut off at the end of the captured log]
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.987982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.987996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.736 [2024-07-21 03:38:04.988270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.736 [2024-07-21 03:38:04.988287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.737 [2024-07-21 03:38:04.988422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 
03:38:04.988736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.988969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.988984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.989000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.989014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.989030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.989045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.989061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.989074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.989089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a06f0 is same with the state(5) to be set 00:28:19.737 [2024-07-21 03:38:04.990322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.737 [2024-07-21 03:38:04.990831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.737 [2024-07-21 03:38:04.990847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.990861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.990877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.990892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.990908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.990943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.990958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.990973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.990988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.991981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.991997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.738 [2024-07-21 03:38:04.992168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.738 [2024-07-21 03:38:04.992184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.992199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.992215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.992229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.992245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.992260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.992275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.992290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.992306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.992320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.992336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f630 is same with the state(5) to be set 00:28:19.739 [2024-07-21 03:38:04.993585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.993983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.993997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.739 [2024-07-21 03:38:04.994764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.739 [2024-07-21 03:38:04.994778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.994981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.994995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.740 [2024-07-21 03:38:04.995574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.740 [2024-07-21 03:38:04.995589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240b10 is same with the state(5) to be set 00:28:19.740 [2024-07-21 03:38:04.997087] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997159] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997176] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.740 [2024-07-21 03:38:04.997475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1247300 with addr=10.0.0.2, port=4420 00:28:19.740 [2024-07-21 03:38:04.997497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247300 is same with the state(5) to be set 00:28:19.740 [2024-07-21 03:38:04.997525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1245190 (9): Bad file descriptor 00:28:19.740 [2024-07-21 03:38:04.997545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ff90 (9): Bad file descriptor 00:28:19.740 [2024-07-21 03:38:04.997626] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:19.740 [2024-07-21 03:38:04.997666] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:19.740 [2024-07-21 03:38:04.997687] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:19.740 [2024-07-21 03:38:04.997711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247300 (9): Bad file descriptor 00:28:19.740 [2024-07-21 03:38:04.997818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:19.740 [2024-07-21 03:38:04.997968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.740 [2024-07-21 03:38:04.997995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1275f90 with addr=10.0.0.2, port=4420 00:28:19.740 [2024-07-21 03:38:04.998011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1275f90 is same with the state(5) to be set 00:28:19.740 [2024-07-21 03:38:04.998109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.740 [2024-07-21 03:38:04.998134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13eaf50 with addr=10.0.0.2, port=4420 00:28:19.740 [2024-07-21 03:38:04.998150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eaf50 is same with the state(5) to be set 00:28:19.740 [2024-07-21 03:38:04.998233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.740 [2024-07-21 03:38:04.998257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272810 with addr=10.0.0.2, port=4420 00:28:19.741 [2024-07-21 03:38:04.998274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272810 is same with the state(5) to be set 00:28:19.741 [2024-07-21 03:38:04.998390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.741 [2024-07-21 03:38:04.998415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126a6b0 with addr=10.0.0.2, port=4420 00:28:19.741 [2024-07-21 03:38:04.998430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a6b0 is same with the state(5) to be set 00:28:19.741 [2024-07-21 03:38:04.998516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.741 [2024-07-21 03:38:04.998540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1409ec0 with addr=10.0.0.2, port=4420 00:28:19.741 [2024-07-21 03:38:04.998557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409ec0 is same with the state(5) to be set 00:28:19.741 [2024-07-21 03:38:04.998575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:19.741 [2024-07-21 03:38:04.998590] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:19.741 [2024-07-21 03:38:04.998606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:19.741 [2024-07-21 03:38:04.998637] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:19.741 [2024-07-21 03:38:04.998654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:19.741 [2024-07-21 03:38:04.998667] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:19.741 [2024-07-21 03:38:04.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:04.999968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:04.999985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 
03:38:05.000078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.741 [2024-07-21 03:38:05.000895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.741 [2024-07-21 03:38:05.000909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.000926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.000944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.000960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.000975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.000991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.742 [2024-07-21 03:38:05.001757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.742 [2024-07-21 03:38:05.001772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1242010 is same with the state(5) to be set 00:28:19.742 [2024-07-21 03:38:05.003936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.742 [2024-07-21 03:38:05.003962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
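The paired NOTICE lines above are SPDK printing each outstanding READ/WRITE command together with its completion status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic) and status code 0x08, meaning the I/O was aborted because its submission queue was deleted while the shutdown test was resetting controllers, not because the media failed. A minimal sketch, using only the public SPDK completion accessors (the callback name and retry note are hypothetical, not part of this test), of how such a completion can be recognized:

```c
#include "spdk/nvme.h"

/* Hypothetical I/O completion callback; sketch only. It checks for the
 * same status that spdk_nvme_print_completion reports above:
 * sct 0x0 (GENERIC), sc 0x08 (ABORTED - SQ DELETION). */
static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The queue pair was torn down underneath this command;
		 * the I/O may be resubmitted once the controller has
		 * been reconnected. */
	}
}
```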
00:28:20.001 task offset: 26112 on job bdev=Nvme1n1 fails
00:28:20.001
00:28:20.001 Latency(us)
00:28:20.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme1n1 ended in about 0.88 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme1n1 : 0.88 217.53 13.60 72.51 0.00 218082.37 4296.25 251658.24
00:28:20.001 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme2n1 ended in about 0.91 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme2n1 : 0.91 140.34 8.77 70.17 0.00 294487.99 21845.33 253211.69
00:28:20.001 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme3n1 ended in about 0.92 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme3n1 : 0.92 139.03 8.69 69.52 0.00 291146.21 30874.74 262532.36
00:28:20.001 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme4n1 ended in about 0.89 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme4n1 : 0.89 216.53 13.53 72.18 0.00 205262.60 4805.97 262532.36
00:28:20.001 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme5n1 ended in about 0.92 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme5n1 : 0.92 138.54 8.66 69.27 0.00 280027.84 38253.61 279620.27
00:28:20.001 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme6n1 ended in about 0.93 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme6n1 : 0.93 138.06 8.63 69.03 0.00 274989.01 20388.98 267192.70
00:28:20.001 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme7n1 ended in about 0.93 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme7n1 : 0.93 217.11 13.57 68.79 0.00 194890.85 13495.56 250104.79
00:28:20.001 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme8n1 ended in about 0.94 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme8n1 : 0.94 136.67 8.54 68.33 0.00 266005.36 21748.24 253211.69
00:28:20.001 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme9n1 ended in about 0.90 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme9n1 : 0.90 142.99 8.94 71.50 0.00 246434.07 25437.68 260978.92
00:28:20.001 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:20.001 Job: Nvme10n1 ended in about 0.92 seconds with error
00:28:20.001 Verification LBA range: start 0x0 length 0x400
00:28:20.001 Nvme10n1 : 0.92 139.82 8.74 69.91 0.00 247262.37 20971.52 298261.62
00:28:20.001 ===================================================================================================================
00:28:20.001 Total : 1626.61 101.66 701.19 0.00 247448.17 4296.25 298261.62
00:28:20.001 [2024-07-21 03:38:05.032659] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:20.001 [2024-07-21 03:38:05.032752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting
controller 00:28:20.001 [2024-07-21 03:38:05.033011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.001 [2024-07-21 03:38:05.033047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f610 with addr=10.0.0.2, port=4420 00:28:20.001 [2024-07-21 03:38:05.033068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3f610 is same with the state(5) to be set 00:28:20.001 [2024-07-21 03:38:05.033095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275f90 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eaf50 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272810 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a6b0 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1409ec0 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.001 [2024-07-21 03:38:05.033210] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.001 [2024-07-21 03:38:05.033226] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.001 [2024-07-21 03:38:05.033708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.001 [2024-07-21 03:38:05.033863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.001 [2024-07-21 03:38:05.033893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ebf00 with addr=10.0.0.2, port=4420 00:28:20.001 [2024-07-21 03:38:05.033911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ebf00 is same with the state(5) to be set 00:28:20.001 [2024-07-21 03:38:05.033930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3f610 (9): Bad file descriptor 00:28:20.001 [2024-07-21 03:38:05.033949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:20.001 [2024-07-21 03:38:05.033963] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:20.001 [2024-07-21 03:38:05.033978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:20.001 [2024-07-21 03:38:05.033997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:20.001 [2024-07-21 03:38:05.034012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:20.001 [2024-07-21 03:38:05.034025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
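Each posix_sock_create failure above reports errno = 111, which on Linux is ECONNREFUSED: the target's listeners on 10.0.0.2:4420 have already been torn down, so every reconnect attempt is actively refused rather than timing out. A self-contained sketch (not part of the test suite) that decodes the value:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* errno 111 as seen in the posix_sock_create errors above. */
	printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
	/* On Linux this prints: errno 111: Connection refused */
	return 0;
}
```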
00:28:20.001 [2024-07-21 03:38:05.034042] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.034057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.034071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:20.002 [2024-07-21 03:38:05.034090] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.034105] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.034129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:20.002 [2024-07-21 03:38:05.034149] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.034164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.034177] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:20.002 [2024-07-21 03:38:05.034220] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034242] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034260] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034279] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034297] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034316] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.002 [2024-07-21 03:38:05.034694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.034719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.034733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.034746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.034759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.034783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ebf00 (9): Bad file descriptor 00:28:20.002 [2024-07-21 03:38:05.034803] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.034816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.034830] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
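The reinitialization failures above trace SPDK's asynchronous reset path: the controller is disconnected (logged as "resetting controller"), reconnection is started, and spdk_nvme_ctrlr_reconnect_poll_async is polled until it either succeeds or the controller is moved to the failed state by nvme_ctrlr_fail. A minimal sketch of that sequence against the public API, assuming ctrlr is an already-attached handle and with error handling trimmed (a real application would poll from a periodic poller rather than spin):

```c
#include <errno.h>
#include "spdk/nvme.h"

/* Sketch of the reconnect sequence whose log messages appear above.
 * `ctrlr` is assumed to be a previously attached controller handle. */
static int
reset_and_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);
	do {
		/* Returns -EAGAIN while the reconnect is still in progress;
		 * a nonzero final value corresponds to the "controller
		 * reinitialization failed" errors logged above. */
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;
}
```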
00:28:20.002 [2024-07-21 03:38:05.035286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:20.002 [2024-07-21 03:38:05.035316] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:20.002 [2024-07-21 03:38:05.035334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.035363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.035380] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.035393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:20.002 [2024-07-21 03:38:05.035431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.002 [2024-07-21 03:38:05.035464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.035594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.002 [2024-07-21 03:38:05.035630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129ff90 with addr=10.0.0.2, port=4420 00:28:20.002 [2024-07-21 03:38:05.035651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ff90 is same with the state(5) to be set 00:28:20.002 [2024-07-21 03:38:05.035748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.002 [2024-07-21 03:38:05.035778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1245190 with addr=10.0.0.2, port=4420 00:28:20.002 [2024-07-21 03:38:05.035796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245190 is same with the state(5) to be set 00:28:20.002 [2024-07-21 03:38:05.035911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.002 [2024-07-21 03:38:05.035937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1247300 with addr=10.0.0.2, port=4420 00:28:20.002 [2024-07-21 03:38:05.035953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247300 is same with the state(5) to be set 00:28:20.002 [2024-07-21 03:38:05.035974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ff90 (9): Bad file descriptor 00:28:20.002 [2024-07-21 03:38:05.035994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1245190 (9): Bad file descriptor 00:28:20.002 [2024-07-21 03:38:05.036039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247300 (9): Bad file descriptor 00:28:20.002 [2024-07-21 03:38:05.036062] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.036077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.036091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:20.002 [2024-07-21 03:38:05.036108] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.036123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.036136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:20.002 [2024-07-21 03:38:05.036172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.036191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.002 [2024-07-21 03:38:05.036204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.002 [2024-07-21 03:38:05.036217] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.002 [2024-07-21 03:38:05.036231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.002 [2024-07-21 03:38:05.036271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.260 03:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:20.260 03:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:21.191 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2496994 00:28:21.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2496994) - No such process 00:28:21.191 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:21.191 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.449 rmmod nvme_tcp 00:28:21.449 rmmod nvme_fabrics 00:28:21.449 rmmod nvme_keyring 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- 
# return 0 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.449 03:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:23.346 00:28:23.346 real 0m7.095s 00:28:23.346 user 0m16.319s 00:28:23.346 sys 0m1.440s 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.346 ************************************ 00:28:23.346 END TEST nvmf_shutdown_tc3 00:28:23.346 ************************************ 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:23.346 00:28:23.346 real 0m26.490s 00:28:23.346 user 1m12.560s 00:28:23.346 sys 0m6.229s 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:23.346 03:38:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:23.347 ************************************ 00:28:23.347 END TEST nvmf_shutdown 00:28:23.347 ************************************ 00:28:23.347 03:38:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:23.347 03:38:08 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.347 03:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.604 03:38:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:23.604 03:38:08 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:23.604 03:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.604 03:38:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:23.604 03:38:08 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:23.604 03:38:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:23.604 03:38:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:23.604 03:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.604 ************************************ 00:28:23.604 START TEST nvmf_multicontroller 00:28:23.604 ************************************ 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:23.604 * Looking for test storage... 
00:28:23.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:23.604 03:38:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.604 03:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.502 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.502 03:38:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.503 03:38:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.503 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:28:25.761 00:28:25.761 --- 10.0.0.2 ping statistics --- 00:28:25.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.761 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:25.761 00:28:25.761 --- 10.0.0.1 ping statistics --- 00:28:25.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.761 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2499386 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2499386 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2499386 ']' 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- 
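Distilled from the nvmf_tcp_init trace above: the harness splits the two E810 ports across network namespaces so one host can act as both target and initiator over real NICs, then verifies the link with a ping in each direction. The interface names and 10.0.0.0/24 addresses below are simply the values this run picked up, not fixed constants.

# target port moves into a private namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator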
common/autotest_common.sh@832 -- # local max_retries=100 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:25.761 03:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 [2024-07-21 03:38:10.917006] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:25.761 [2024-07-21 03:38:10.917091] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.761 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.761 [2024-07-21 03:38:10.990981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:26.019 [2024-07-21 03:38:11.081314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.019 [2024-07-21 03:38:11.081367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.019 [2024-07-21 03:38:11.081380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.019 [2024-07-21 03:38:11.081391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.019 [2024-07-21 03:38:11.081401] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.019 [2024-07-21 03:38:11.081485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.019 [2024-07-21 03:38:11.081547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.019 [2024-07-21 03:38:11.081549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 [2024-07-21 03:38:11.230187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 
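nvmfappstart, condensed from the lines above: the target binary runs inside the namespace with core mask 0xE (hence the three reactors on cores 1-3), and the harness blocks on the RPC socket before configuring anything. The rpc_get_methods probe below is an illustrative stand-in for waitforlisten's polling, not the literal helper.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll until the app answers on its default UNIX socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done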
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 Malloc0 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 [2024-07-21 03:38:11.294504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 [2024-07-21 03:38:11.302370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.019 Malloc1 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.019 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2499472 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2499472 /var/tmp/bdevperf.sock 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2499472 ']' 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
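For reference, the subsystem layout the rpc_cmd calls above construct, written out as the equivalent rpc.py sequence (socket /var/tmp/spdk.sock is the default; the script path is assumed relative to the SPDK checkout):

RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# net result: two subsystems, one Malloc namespace each, both reachable
# on ports 4420 and 4421 at 10.0.0.2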
00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:26.277 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.535 NVMe0n1 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.535 1 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.535 request: 00:28:26.535 { 00:28:26.535 "name": "NVMe0", 00:28:26.535 "trtype": "tcp", 00:28:26.535 "traddr": "10.0.0.2", 00:28:26.535 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:26.535 "hostaddr": "10.0.0.2", 00:28:26.535 "hostsvcid": "60000", 00:28:26.535 "adrfam": "ipv4", 00:28:26.535 "trsvcid": "4420", 00:28:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.535 "method": 
"bdev_nvme_attach_controller", 00:28:26.535 "req_id": 1 00:28:26.535 } 00:28:26.535 Got JSON-RPC error response 00:28:26.535 response: 00:28:26.535 { 00:28:26.535 "code": -114, 00:28:26.535 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:26.535 } 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.535 request: 00:28:26.535 { 00:28:26.535 "name": "NVMe0", 00:28:26.535 "trtype": "tcp", 00:28:26.535 "traddr": "10.0.0.2", 00:28:26.535 "hostaddr": "10.0.0.2", 00:28:26.535 "hostsvcid": "60000", 00:28:26.535 "adrfam": "ipv4", 00:28:26.535 "trsvcid": "4420", 00:28:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.535 "method": "bdev_nvme_attach_controller", 00:28:26.535 "req_id": 1 00:28:26.535 } 00:28:26.535 Got JSON-RPC error response 00:28:26.535 response: 00:28:26.535 { 00:28:26.535 "code": -114, 00:28:26.535 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:26.535 } 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:26.535 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.536 request: 00:28:26.536 { 00:28:26.536 "name": "NVMe0", 00:28:26.536 "trtype": "tcp", 00:28:26.536 "traddr": "10.0.0.2", 00:28:26.536 "hostaddr": "10.0.0.2", 00:28:26.536 "hostsvcid": "60000", 00:28:26.536 "adrfam": "ipv4", 00:28:26.536 "trsvcid": "4420", 00:28:26.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.536 "multipath": "disable", 00:28:26.536 "method": "bdev_nvme_attach_controller", 00:28:26.536 "req_id": 1 00:28:26.536 } 00:28:26.536 Got JSON-RPC error response 00:28:26.536 response: 00:28:26.536 { 00:28:26.536 "code": -114, 00:28:26.536 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:26.536 } 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.536 request: 00:28:26.536 { 00:28:26.536 "name": "NVMe0", 00:28:26.536 "trtype": "tcp", 00:28:26.536 "traddr": "10.0.0.2", 00:28:26.536 "hostaddr": "10.0.0.2", 00:28:26.536 "hostsvcid": "60000", 00:28:26.536 "adrfam": "ipv4", 00:28:26.536 "trsvcid": "4420", 00:28:26.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.536 "multipath": "failover", 00:28:26.536 "method": "bdev_nvme_attach_controller", 00:28:26.536 "req_id": 1 00:28:26.536 } 00:28:26.536 Got JSON-RPC error response 00:28:26.536 response: 00:28:26.536 { 00:28:26.536 "code": -114, 00:28:26.536 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:26.536 } 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.536 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.794 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.794 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
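Every rejected attach above reuses the controller name NVMe0 with a conflicting identity: a different hostnqn, a different subsystem NQN, or an explicit multipath mode (-x disable / -x failover) aimed at a path the controller already holds; each comes back as JSON-RPC error -114. What does succeed is the call that just completed: the same name and subsystem pointed at the second listener, which adds another path to the existing controller. The working pair, condensed (rpc.py standing in for the rpc_cmd wrapper):

# first path creates controller NVMe0 (and bdev NVMe0n1)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000
# same name, same subnqn, new port: accepted as an additional path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1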
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:26.794 03:38:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:28.166 0 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2499472 ']' 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2499472' 00:28:28.166 killing process with pid 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2499472 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:28.166 03:38:13 
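bdevperf was launched with -z, so it sits idle after startup instead of running the configured workload; attaching the controllers and then calling perform_tests over its RPC socket, as seen above, is what drives the 1-second write run. The flow, condensed (paths as used in this run, relative to the SPDK checkout):

# start idle (-z) with an RPC socket and a queued workload definition
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# ...attach NVMe0/NVMe1 over the socket, then kick off the run
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# exit status 0 here is what the harness treats as a passing workload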
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:28.166 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:28.166 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:28.166 [2024-07-21 03:38:11.410346] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:28.167 [2024-07-21 03:38:11.410437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499472 ] 00:28:28.167 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.167 [2024-07-21 03:38:11.472691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.167 [2024-07-21 03:38:11.558338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.167 [2024-07-21 03:38:11.982141] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name b05d4498-edbb-4c55-8edb-eb4a06d2550a already exists 00:28:28.167 [2024-07-21 03:38:11.982181] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:b05d4498-edbb-4c55-8edb-eb4a06d2550a alias for bdev NVMe1n1 00:28:28.167 [2024-07-21 03:38:11.982198] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:28.167 Running I/O for 1 seconds... 
00:28:28.167 00:28:28.167 Latency(us) 00:28:28.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.167 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:28.167 NVMe0n1 : 1.00 19106.10 74.63 0.00 0.00 6688.85 5776.88 19709.35 00:28:28.167 =================================================================================================================== 00:28:28.167 Total : 19106.10 74.63 0.00 0.00 6688.85 5776.88 19709.35 00:28:28.167 Received shutdown signal, test time was about 1.000000 seconds 00:28:28.167 00:28:28.167 Latency(us) 00:28:28.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.167 =================================================================================================================== 00:28:28.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.167 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.167 rmmod nvme_tcp 00:28:28.167 rmmod nvme_fabrics 00:28:28.167 rmmod nvme_keyring 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2499386 ']' 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2499386 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2499386 ']' 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2499386 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.167 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2499386 00:28:28.423 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:28.423 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:28.423 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2499386' 00:28:28.423 killing process with pid 2499386 00:28:28.423 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2499386 00:28:28.423 03:38:13 
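Teardown in nvmftestfini mirrors the init: unload the kernel initiator modules (the rmmod lines above), kill the target, and dismantle the namespace. remove_spdk_ns runs with tracing redirected away in this log, so the namespace deletion below is the presumed equivalent rather than a traced line:

modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring
kill $nvmfpid && wait $nvmfpid     # $nvmfpid == 2499386 in this run
ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1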
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2499386 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.681 03:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.586 03:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.586 00:28:30.586 real 0m7.128s 00:28:30.586 user 0m10.698s 00:28:30.586 sys 0m2.261s 00:28:30.586 03:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.586 03:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:30.586 ************************************ 00:28:30.586 END TEST nvmf_multicontroller 00:28:30.586 ************************************ 00:28:30.586 03:38:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:30.586 03:38:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:30.586 03:38:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.586 03:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.586 ************************************ 00:28:30.586 START TEST nvmf_aer 00:28:30.586 ************************************ 00:28:30.586 03:38:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:30.843 * Looking for test storage... 
00:28:30.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.843 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.844 03:38:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:32.743 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:32.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.744 
03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:28:32.744 00:28:32.744 --- 10.0.0.2 ping statistics --- 00:28:32.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.744 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:32.744 00:28:32.744 --- 10.0.0.1 ping statistics --- 00:28:32.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.744 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:32.744 03:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2501624 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2501624 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 2501624 ']' 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:32.744 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.744 [2024-07-21 03:38:18.043903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:32.744 [2024-07-21 03:38:18.044005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.003 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.003 [2024-07-21 03:38:18.111570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.003 [2024-07-21 03:38:18.202079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.003 [2024-07-21 03:38:18.202144] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:33.003 [2024-07-21 03:38:18.202157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.003 [2024-07-21 03:38:18.202168] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.003 [2024-07-21 03:38:18.202178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.003 [2024-07-21 03:38:18.202311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.003 [2024-07-21 03:38:18.202344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.003 [2024-07-21 03:38:18.202404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.003 [2024-07-21 03:38:18.202406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 [2024-07-21 03:38:18.359435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 Malloc0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 [2024-07-21 03:38:18.412963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.262 [ 00:28:33.262 { 00:28:33.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:33.262 "subtype": "Discovery", 00:28:33.262 "listen_addresses": [], 00:28:33.262 "allow_any_host": true, 00:28:33.262 "hosts": [] 00:28:33.262 }, 00:28:33.262 { 00:28:33.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.262 "subtype": "NVMe", 00:28:33.262 "listen_addresses": [ 00:28:33.262 { 00:28:33.262 "trtype": "TCP", 00:28:33.262 "adrfam": "IPv4", 00:28:33.262 "traddr": "10.0.0.2", 00:28:33.262 "trsvcid": "4420" 00:28:33.262 } 00:28:33.262 ], 00:28:33.262 "allow_any_host": true, 00:28:33.262 "hosts": [], 00:28:33.262 "serial_number": "SPDK00000000000001", 00:28:33.262 "model_number": "SPDK bdev Controller", 00:28:33.262 "max_namespaces": 2, 00:28:33.262 "min_cntlid": 1, 00:28:33.262 "max_cntlid": 65519, 00:28:33.262 "namespaces": [ 00:28:33.262 { 00:28:33.262 "nsid": 1, 00:28:33.262 "bdev_name": "Malloc0", 00:28:33.262 "name": "Malloc0", 00:28:33.262 "nguid": "4B3677E32DF54B859C42D59AB3FF82A7", 00:28:33.262 "uuid": "4b3677e3-2df5-4b85-9c42-d59ab3ff82a7" 00:28:33.262 } 00:28:33.262 ] 00:28:33.262 } 00:28:33.262 ] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2501762 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:33.262 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:33.262 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.521 Malloc1 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.521 Asynchronous Event Request test 00:28:33.521 Attaching to 10.0.0.2 00:28:33.521 Attached to 10.0.0.2 00:28:33.521 Registering asynchronous event callbacks... 00:28:33.521 Starting namespace attribute notice tests for all controllers... 00:28:33.521 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:33.521 aer_cb - Changed Namespace 00:28:33.521 Cleaning up... 
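The AER exercise above is a touch-file handshake: the aer test binary attaches to cnode1, registers its asynchronous-event callback, the rpc_cmd calls add Malloc1 as a second namespace, the target raises a Namespace Attribute Changed notice (log page 4, aen_event_type 0x02), and the callback creates /tmp/aer_touch_file, which waitforfile has been polling for, as the repeated '[' $i -lt 200 ']' / sleep 0.1 probes show; the subsystem listing that follows confirms Malloc1 landed as nsid 2. A minimal sketch of that polling helper, reconstructed from the probe pattern in the trace (the real helper lives in autotest_common.sh and may differ in detail):

    # Sketch of the waitforfile loop visible above: up to 200 probes spaced
    # 0.1 s apart (roughly 20 s total), returning as soon as the AER callback
    # touches the file. An illustration, not quoted from the source tree.
    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ]; do
            [ "$i" -lt 200 ] || return 1   # give up after ~20 s
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file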
00:28:33.521 [ 00:28:33.521 { 00:28:33.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:33.521 "subtype": "Discovery", 00:28:33.521 "listen_addresses": [], 00:28:33.521 "allow_any_host": true, 00:28:33.521 "hosts": [] 00:28:33.521 }, 00:28:33.521 { 00:28:33.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.521 "subtype": "NVMe", 00:28:33.521 "listen_addresses": [ 00:28:33.521 { 00:28:33.521 "trtype": "TCP", 00:28:33.521 "adrfam": "IPv4", 00:28:33.521 "traddr": "10.0.0.2", 00:28:33.521 "trsvcid": "4420" 00:28:33.521 } 00:28:33.521 ], 00:28:33.521 "allow_any_host": true, 00:28:33.521 "hosts": [], 00:28:33.521 "serial_number": "SPDK00000000000001", 00:28:33.521 "model_number": "SPDK bdev Controller", 00:28:33.521 "max_namespaces": 2, 00:28:33.521 "min_cntlid": 1, 00:28:33.521 "max_cntlid": 65519, 00:28:33.521 "namespaces": [ 00:28:33.521 { 00:28:33.521 "nsid": 1, 00:28:33.521 "bdev_name": "Malloc0", 00:28:33.521 "name": "Malloc0", 00:28:33.521 "nguid": "4B3677E32DF54B859C42D59AB3FF82A7", 00:28:33.521 "uuid": "4b3677e3-2df5-4b85-9c42-d59ab3ff82a7" 00:28:33.521 }, 00:28:33.521 { 00:28:33.521 "nsid": 2, 00:28:33.521 "bdev_name": "Malloc1", 00:28:33.521 "name": "Malloc1", 00:28:33.521 "nguid": "D936446D4C4E44668DF49FBA65005613", 00:28:33.521 "uuid": "d936446d-4c4e-4466-8df4-9fba65005613" 00:28:33.521 } 00:28:33.521 ] 00:28:33.521 } 00:28:33.521 ] 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2501762 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.521 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.779 rmmod nvme_tcp 00:28:33.779 rmmod nvme_fabrics 00:28:33.779 rmmod nvme_keyring 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2501624 ']' 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2501624 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 2501624 ']' 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 2501624 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2501624 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2501624' 00:28:33.779 killing process with pid 2501624 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 2501624 00:28:33.779 03:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 2501624 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.037 03:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.937 03:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.937 00:28:35.937 real 0m5.383s 00:28:35.937 user 0m4.515s 00:28:35.937 sys 0m1.912s 00:28:35.937 03:38:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:35.937 03:38:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.937 ************************************ 00:28:35.937 END TEST nvmf_aer 00:28:35.937 ************************************ 00:28:36.196 03:38:21 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:36.196 03:38:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:36.196 03:38:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:36.196 03:38:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:36.196 ************************************ 00:28:36.196 START TEST nvmf_async_init 00:28:36.196 ************************************ 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:36.196 * Looking for test storage... 
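Each suite in this run is wrapped by run_test, which produces the starred START/END banners and the real/user/sys summary recorded above for nvmf_aer by timing the wrapped script. A minimal sketch of that banner-and-timing pattern, assuming only what the trace shows (the actual run_test in autotest_common.sh also performs argument checks such as the '[' 3 -le 1 ']' probe traced above):

    # Illustrative reconstruction of the run_test wrapper, not the verbatim
    # autotest_common.sh implementation.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }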
00:28:36.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6ec68ffca515412b94e7c162f2c3fbee 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:36.196 03:38:21 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:36.196 03:38:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.138 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.138 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.138 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
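The discovery pass here resolves each supported PCI function to its kernel netdev through sysfs; the "/sys/bus/pci/devices/$pci/net/"* glob in the trace is the whole trick, and the second port is handled the same way just below. A standalone replay of that lookup for the two E810 ports this host reports (the PCI addresses are taken from this log and will differ on other machines):

    # Replay of the trace's sysfs glob mapping PCI functions to netdevs.
    # Addresses are the ones this host reported; the rest is a sketch.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue   # no netdev bound to this function
            echo "Found net devices under $pci: ${path##*/}"
        done
    done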
00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.138 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:38.138 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:38.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:38.139 00:28:38.139 --- 10.0.0.2 ping statistics --- 00:28:38.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.139 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:28:38.139 00:28:38.139 --- 10.0.0.1 ping statistics --- 00:28:38.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.139 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2503696 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2503696 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 2503696 ']' 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:38.139 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.397 [2024-07-21 03:38:23.486988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
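One difference from the nvmf_aer run is the reactor mask handed to nvmf_tgt: -m takes a hex bitmask of CPU cores, so the 0xF used earlier started four reactors (the "Reactor started on core 0..3" notices above), while the -m 0x1 here pins a single reactor to core 0. A hypothetical helper, core_mask, showing how such a mask is formed (not part of the SPDK scripts):

    # core_mask is a hypothetical illustration: -m/-c takes a hex bitmask
    # with bit N set for each CPU core N the app may use.
    core_mask() {
        local mask=0 core
        for core in "$@"; do
            mask=$((mask | (1 << core)))
        done
        printf '0x%X\n' "$mask"
    }
    core_mask 0          # -> 0x1 (this nvmf_async_init target)
    core_mask 0 1 2 3    # -> 0xF (the earlier nvmf_aer target)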
00:28:38.397 [2024-07-21 03:38:23.487081] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.397 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.397 [2024-07-21 03:38:23.556365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.397 [2024-07-21 03:38:23.647090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.397 [2024-07-21 03:38:23.647148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.397 [2024-07-21 03:38:23.647165] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.397 [2024-07-21 03:38:23.647178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.397 [2024-07-21 03:38:23.647190] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.397 [2024-07-21 03:38:23.647225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 [2024-07-21 03:38:23.800177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 null0 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6ec68ffca515412b94e7c162f2c3fbee 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.656 [2024-07-21 03:38:23.840432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.656 03:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.914 nvme0n1 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.914 [ 00:28:38.914 { 00:28:38.914 "name": "nvme0n1", 00:28:38.914 "aliases": [ 00:28:38.914 "6ec68ffc-a515-412b-94e7-c162f2c3fbee" 00:28:38.914 ], 00:28:38.914 "product_name": "NVMe disk", 00:28:38.914 "block_size": 512, 00:28:38.914 "num_blocks": 2097152, 00:28:38.914 "uuid": "6ec68ffc-a515-412b-94e7-c162f2c3fbee", 00:28:38.914 "assigned_rate_limits": { 00:28:38.914 "rw_ios_per_sec": 0, 00:28:38.914 "rw_mbytes_per_sec": 0, 00:28:38.914 "r_mbytes_per_sec": 0, 00:28:38.914 "w_mbytes_per_sec": 0 00:28:38.914 }, 00:28:38.914 "claimed": false, 00:28:38.914 "zoned": false, 00:28:38.914 "supported_io_types": { 00:28:38.914 "read": true, 00:28:38.914 "write": true, 00:28:38.914 "unmap": false, 00:28:38.914 "write_zeroes": true, 00:28:38.914 "flush": true, 00:28:38.914 "reset": true, 00:28:38.914 "compare": true, 00:28:38.914 "compare_and_write": true, 00:28:38.914 "abort": true, 00:28:38.914 "nvme_admin": true, 00:28:38.914 "nvme_io": true 00:28:38.914 }, 00:28:38.914 "memory_domains": [ 00:28:38.914 { 00:28:38.914 "dma_device_id": "system", 00:28:38.914 "dma_device_type": 1 00:28:38.914 } 00:28:38.914 ], 00:28:38.914 "driver_specific": { 00:28:38.914 "nvme": [ 00:28:38.914 { 00:28:38.914 "trid": { 00:28:38.914 "trtype": "TCP", 00:28:38.914 "adrfam": "IPv4", 00:28:38.914 "traddr": "10.0.0.2", 00:28:38.914 "trsvcid": "4420", 00:28:38.914 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:38.914 }, 00:28:38.914 "ctrlr_data": { 00:28:38.914 "cntlid": 1, 00:28:38.914 "vendor_id": "0x8086", 00:28:38.914 "model_number": "SPDK bdev Controller", 00:28:38.914 "serial_number": "00000000000000000000", 00:28:38.914 "firmware_revision": 
"24.05.1", 00:28:38.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.914 "oacs": { 00:28:38.914 "security": 0, 00:28:38.914 "format": 0, 00:28:38.914 "firmware": 0, 00:28:38.914 "ns_manage": 0 00:28:38.914 }, 00:28:38.914 "multi_ctrlr": true, 00:28:38.914 "ana_reporting": false 00:28:38.914 }, 00:28:38.914 "vs": { 00:28:38.914 "nvme_version": "1.3" 00:28:38.914 }, 00:28:38.914 "ns_data": { 00:28:38.914 "id": 1, 00:28:38.914 "can_share": true 00:28:38.914 } 00:28:38.914 } 00:28:38.914 ], 00:28:38.914 "mp_policy": "active_passive" 00:28:38.914 } 00:28:38.914 } 00:28:38.914 ] 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.914 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.914 [2024-07-21 03:38:24.093116] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:38.915 [2024-07-21 03:38:24.093217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff760 (9): Bad file descriptor 00:28:39.173 [2024-07-21 03:38:24.235779] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.173 [ 00:28:39.173 { 00:28:39.173 "name": "nvme0n1", 00:28:39.173 "aliases": [ 00:28:39.173 "6ec68ffc-a515-412b-94e7-c162f2c3fbee" 00:28:39.173 ], 00:28:39.173 "product_name": "NVMe disk", 00:28:39.173 "block_size": 512, 00:28:39.173 "num_blocks": 2097152, 00:28:39.173 "uuid": "6ec68ffc-a515-412b-94e7-c162f2c3fbee", 00:28:39.173 "assigned_rate_limits": { 00:28:39.173 "rw_ios_per_sec": 0, 00:28:39.173 "rw_mbytes_per_sec": 0, 00:28:39.173 "r_mbytes_per_sec": 0, 00:28:39.173 "w_mbytes_per_sec": 0 00:28:39.173 }, 00:28:39.173 "claimed": false, 00:28:39.173 "zoned": false, 00:28:39.173 "supported_io_types": { 00:28:39.173 "read": true, 00:28:39.173 "write": true, 00:28:39.173 "unmap": false, 00:28:39.173 "write_zeroes": true, 00:28:39.173 "flush": true, 00:28:39.173 "reset": true, 00:28:39.173 "compare": true, 00:28:39.173 "compare_and_write": true, 00:28:39.173 "abort": true, 00:28:39.173 "nvme_admin": true, 00:28:39.173 "nvme_io": true 00:28:39.173 }, 00:28:39.173 "memory_domains": [ 00:28:39.173 { 00:28:39.173 "dma_device_id": "system", 00:28:39.173 "dma_device_type": 1 00:28:39.173 } 00:28:39.173 ], 00:28:39.173 "driver_specific": { 00:28:39.173 "nvme": [ 00:28:39.173 { 00:28:39.173 "trid": { 00:28:39.173 "trtype": "TCP", 00:28:39.173 "adrfam": "IPv4", 00:28:39.173 "traddr": "10.0.0.2", 00:28:39.173 "trsvcid": "4420", 00:28:39.173 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:39.173 }, 00:28:39.173 "ctrlr_data": { 00:28:39.173 "cntlid": 2, 00:28:39.173 "vendor_id": "0x8086", 00:28:39.173 "model_number": "SPDK bdev Controller", 00:28:39.173 "serial_number": "00000000000000000000", 00:28:39.173 "firmware_revision": "24.05.1", 00:28:39.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.173 
"oacs": { 00:28:39.173 "security": 0, 00:28:39.173 "format": 0, 00:28:39.173 "firmware": 0, 00:28:39.173 "ns_manage": 0 00:28:39.173 }, 00:28:39.173 "multi_ctrlr": true, 00:28:39.173 "ana_reporting": false 00:28:39.173 }, 00:28:39.173 "vs": { 00:28:39.173 "nvme_version": "1.3" 00:28:39.173 }, 00:28:39.173 "ns_data": { 00:28:39.173 "id": 1, 00:28:39.173 "can_share": true 00:28:39.173 } 00:28:39.173 } 00:28:39.173 ], 00:28:39.173 "mp_policy": "active_passive" 00:28:39.173 } 00:28:39.173 } 00:28:39.173 ] 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Y7zVzmwDya 00:28:39.173 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Y7zVzmwDya 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.174 [2024-07-21 03:38:24.289774] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:39.174 [2024-07-21 03:38:24.289986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y7zVzmwDya 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.174 [2024-07-21 03:38:24.297793] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y7zVzmwDya 00:28:39.174 03:38:24 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.174 [2024-07-21 03:38:24.305805] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:39.174 [2024-07-21 03:38:24.305878] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:39.174 nvme0n1 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.174 [ 00:28:39.174 { 00:28:39.174 "name": "nvme0n1", 00:28:39.174 "aliases": [ 00:28:39.174 "6ec68ffc-a515-412b-94e7-c162f2c3fbee" 00:28:39.174 ], 00:28:39.174 "product_name": "NVMe disk", 00:28:39.174 "block_size": 512, 00:28:39.174 "num_blocks": 2097152, 00:28:39.174 "uuid": "6ec68ffc-a515-412b-94e7-c162f2c3fbee", 00:28:39.174 "assigned_rate_limits": { 00:28:39.174 "rw_ios_per_sec": 0, 00:28:39.174 "rw_mbytes_per_sec": 0, 00:28:39.174 "r_mbytes_per_sec": 0, 00:28:39.174 "w_mbytes_per_sec": 0 00:28:39.174 }, 00:28:39.174 "claimed": false, 00:28:39.174 "zoned": false, 00:28:39.174 "supported_io_types": { 00:28:39.174 "read": true, 00:28:39.174 "write": true, 00:28:39.174 "unmap": false, 00:28:39.174 "write_zeroes": true, 00:28:39.174 "flush": true, 00:28:39.174 "reset": true, 00:28:39.174 "compare": true, 00:28:39.174 "compare_and_write": true, 00:28:39.174 "abort": true, 00:28:39.174 "nvme_admin": true, 00:28:39.174 "nvme_io": true 00:28:39.174 }, 00:28:39.174 "memory_domains": [ 00:28:39.174 { 00:28:39.174 "dma_device_id": "system", 00:28:39.174 "dma_device_type": 1 00:28:39.174 } 00:28:39.174 ], 00:28:39.174 "driver_specific": { 00:28:39.174 "nvme": [ 00:28:39.174 { 00:28:39.174 "trid": { 00:28:39.174 "trtype": "TCP", 00:28:39.174 "adrfam": "IPv4", 00:28:39.174 "traddr": "10.0.0.2", 00:28:39.174 "trsvcid": "4421", 00:28:39.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:39.174 }, 00:28:39.174 "ctrlr_data": { 00:28:39.174 "cntlid": 3, 00:28:39.174 "vendor_id": "0x8086", 00:28:39.174 "model_number": "SPDK bdev Controller", 00:28:39.174 "serial_number": "00000000000000000000", 00:28:39.174 "firmware_revision": "24.05.1", 00:28:39.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.174 "oacs": { 00:28:39.174 "security": 0, 00:28:39.174 "format": 0, 00:28:39.174 "firmware": 0, 00:28:39.174 "ns_manage": 0 00:28:39.174 }, 00:28:39.174 "multi_ctrlr": true, 00:28:39.174 "ana_reporting": false 00:28:39.174 }, 00:28:39.174 "vs": { 00:28:39.174 "nvme_version": "1.3" 00:28:39.174 }, 00:28:39.174 "ns_data": { 00:28:39.174 "id": 1, 00:28:39.174 "can_share": true 00:28:39.174 } 00:28:39.174 } 00:28:39.174 ], 00:28:39.174 "mp_policy": "active_passive" 00:28:39.174 } 00:28:39.174 } 00:28:39.174 ] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Y7zVzmwDya 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.174 rmmod nvme_tcp 00:28:39.174 rmmod nvme_fabrics 00:28:39.174 rmmod nvme_keyring 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2503696 ']' 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2503696 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 2503696 ']' 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 2503696 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:39.174 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2503696 00:28:39.432 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2503696' 00:28:39.433 killing process with pid 2503696 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 2503696 00:28:39.433 [2024-07-21 03:38:24.506067] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:39.433 [2024-07-21 03:38:24.506110] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 2503696 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.433 
03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.433 03:38:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.960 03:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:41.960 00:28:41.960 real 0m5.460s 00:28:41.960 user 0m2.042s 00:28:41.960 sys 0m1.795s 00:28:41.960 03:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:41.960 03:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.960 ************************************ 00:28:41.960 END TEST nvmf_async_init 00:28:41.960 ************************************ 00:28:41.960 03:38:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:41.960 03:38:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:41.960 03:38:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.960 03:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:41.960 ************************************ 00:28:41.960 START TEST dma 00:28:41.960 ************************************ 00:28:41.960 03:38:26 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:41.960 * Looking for test storage... 00:28:41.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:41.960 03:38:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.960 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.960 03:38:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.960 03:38:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.960 03:38:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.960 03:38:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:41.961 03:38:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:41.961 03:38:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:41.961 03:38:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:41.961 03:38:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:41.961 00:28:41.961 real 0m0.056s 00:28:41.961 user 0m0.019s 00:28:41.961 sys 0m0.042s 00:28:41.961 
03:38:26 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:41.961 03:38:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:41.961 ************************************ 00:28:41.961 END TEST dma 00:28:41.961 ************************************ 00:28:41.961 03:38:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:41.961 03:38:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:41.961 03:38:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.961 03:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:41.961 ************************************ 00:28:41.961 START TEST nvmf_identify 00:28:41.961 ************************************ 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:41.961 * Looking for test storage... 00:28:41.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.961 03:38:26 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:41.961 03:38:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.859 03:38:28 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.859 03:38:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.859 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:28:43.859 00:28:43.859 --- 10.0.0.2 ping statistics --- 00:28:43.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.859 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:28:43.859 00:28:43.859 --- 10.0.0.1 ping statistics --- 00:28:43.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.859 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.859 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2505821 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2505821 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 2505821 ']' 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.860 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.118 [2024-07-21 03:38:29.200658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:44.118 [2024-07-21 03:38:29.200747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.118 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.118 [2024-07-21 03:38:29.265380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.118 [2024-07-21 03:38:29.354538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.118 [2024-07-21 03:38:29.354592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.118 [2024-07-21 03:38:29.354605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.118 [2024-07-21 03:38:29.354623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.118 [2024-07-21 03:38:29.354649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.118 [2024-07-21 03:38:29.354703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.118 [2024-07-21 03:38:29.354762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.118 [2024-07-21 03:38:29.354828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.118 [2024-07-21 03:38:29.354830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 [2024-07-21 03:38:29.486196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 Malloc0 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 [2024-07-21 03:38:29.562582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.377 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.377 [ 00:28:44.377 { 00:28:44.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:44.377 "subtype": "Discovery", 00:28:44.377 "listen_addresses": [ 00:28:44.377 { 00:28:44.377 "trtype": "TCP", 00:28:44.377 "adrfam": "IPv4", 00:28:44.377 "traddr": "10.0.0.2", 00:28:44.377 "trsvcid": "4420" 00:28:44.377 } 00:28:44.377 ], 00:28:44.378 "allow_any_host": true, 00:28:44.378 "hosts": [] 00:28:44.378 }, 00:28:44.378 { 00:28:44.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.378 "subtype": "NVMe", 00:28:44.378 "listen_addresses": [ 00:28:44.378 { 00:28:44.378 "trtype": "TCP", 00:28:44.378 "adrfam": "IPv4", 00:28:44.378 "traddr": "10.0.0.2", 00:28:44.378 "trsvcid": "4420" 00:28:44.378 } 00:28:44.378 ], 00:28:44.378 "allow_any_host": true, 00:28:44.378 "hosts": [], 00:28:44.378 "serial_number": "SPDK00000000000001", 00:28:44.378 "model_number": "SPDK bdev Controller", 00:28:44.378 "max_namespaces": 32, 00:28:44.378 "min_cntlid": 1, 00:28:44.378 "max_cntlid": 65519, 00:28:44.378 "namespaces": [ 00:28:44.378 { 00:28:44.378 "nsid": 1, 00:28:44.378 "bdev_name": "Malloc0", 00:28:44.378 "name": "Malloc0", 00:28:44.378 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:44.378 "eui64": "ABCDEF0123456789", 00:28:44.378 "uuid": "97e67867-8953-43c7-9770-671a5e36939d" 00:28:44.378 } 00:28:44.378 ] 00:28:44.378 } 00:28:44.378 ] 00:28:44.378 03:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.378 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:44.378 [2024-07-21 03:38:29.603784] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:44.378 [2024-07-21 03:38:29.603827] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505904 ] 00:28:44.378 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.378 [2024-07-21 03:38:29.640875] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:44.378 [2024-07-21 03:38:29.640954] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:44.378 [2024-07-21 03:38:29.640964] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:44.378 [2024-07-21 03:38:29.640979] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:44.378 [2024-07-21 03:38:29.640992] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:44.378 [2024-07-21 03:38:29.641256] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:44.378 [2024-07-21 03:38:29.641313] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x235b980 0 00:28:44.378 [2024-07-21 03:38:29.647631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:44.378 [2024-07-21 03:38:29.647663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:44.378 [2024-07-21 03:38:29.647670] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:44.378 [2024-07-21 03:38:29.647676] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:44.378 [2024-07-21 03:38:29.647727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.647738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.647745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.647766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:44.378 [2024-07-21 03:38:29.647791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.655645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.655663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.655670] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.655684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.655706] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:44.378 [2024-07-21 03:38:29.655717] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:44.378 [2024-07-21 03:38:29.655726] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:44.378 [2024-07-21 03:38:29.655746] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.655755] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.655761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.655772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.655796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.655958] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.655973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.655979] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.655986] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.656001] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:44.378 [2024-07-21 03:38:29.656014] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:44.378 [2024-07-21 03:38:29.656027] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.656051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.656071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.656158] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.656172] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.656178] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.656195] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:44.378 [2024-07-21 03:38:29.656209] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.656248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.656269] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.656361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 
03:38:29.656373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.656379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.656395] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656420] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.656436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.656456] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.656539] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.656553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.656559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656566] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.656575] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:44.378 [2024-07-21 03:38:29.656583] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656595] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656705] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:44.378 [2024-07-21 03:38:29.656716] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656732] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.656756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.656778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.656914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.656945] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.656952] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.378 [2024-07-21 03:38:29.656968] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:44.378 [2024-07-21 03:38:29.656984] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.656997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.657004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.378 [2024-07-21 03:38:29.657014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.378 [2024-07-21 03:38:29.657034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.378 [2024-07-21 03:38:29.657120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.378 [2024-07-21 03:38:29.657131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.378 [2024-07-21 03:38:29.657138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.378 [2024-07-21 03:38:29.657145] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.379 [2024-07-21 03:38:29.657153] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:44.379 [2024-07-21 03:38:29.657161] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:44.379 [2024-07-21 03:38:29.657174] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:44.379 [2024-07-21 03:38:29.657187] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:44.379 [2024-07-21 03:38:29.657204] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.379 [2024-07-21 03:38:29.657213] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.379 [2024-07-21 03:38:29.657223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.379 [2024-07-21 03:38:29.657243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.379 [2024-07-21 03:38:29.657361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.379 [2024-07-21 03:38:29.657391] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.379 [2024-07-21 03:38:29.657398] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.379 [2024-07-21 03:38:29.657404] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x235b980): datao=0, datal=4096, cccid=0 00:28:44.379 [2024-07-21 03:38:29.657411] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c34c0) on tqpair(0x235b980): expected_datao=0, payload_size=4096 00:28:44.379 [2024-07-21 03:38:29.657419] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.379 [2024-07-21 03:38:29.657436] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.379 [2024-07-21 03:38:29.657460] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.701628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.639 [2024-07-21 03:38:29.701648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.639 [2024-07-21 03:38:29.701657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.701664] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.639 [2024-07-21 03:38:29.701683] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:44.639 [2024-07-21 03:38:29.701694] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:44.639 [2024-07-21 03:38:29.701702] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:44.639 [2024-07-21 03:38:29.701711] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:44.639 [2024-07-21 03:38:29.701718] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:44.639 [2024-07-21 03:38:29.701731] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:44.639 [2024-07-21 03:38:29.701748] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:44.639 [2024-07-21 03:38:29.701761] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.701768] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.701775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.701787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:44.639 [2024-07-21 03:38:29.701811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.639 [2024-07-21 03:38:29.701960] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.639 [2024-07-21 03:38:29.701975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.639 [2024-07-21 03:38:29.701982] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c34c0) on tqpair=0x235b980 00:28:44.639 [2024-07-21 03:38:29.702019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:44.639 [2024-07-21 03:38:29.702053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.639 [2024-07-21 03:38:29.702085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702092] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702098] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.639 [2024-07-21 03:38:29.702131] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.639 [2024-07-21 03:38:29.702161] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:44.639 [2024-07-21 03:38:29.702180] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:44.639 [2024-07-21 03:38:29.702193] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.639 [2024-07-21 03:38:29.702236] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c34c0, cid 0, qid 0 00:28:44.639 [2024-07-21 03:38:29.702263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3620, cid 1, qid 0 00:28:44.639 [2024-07-21 03:38:29.702272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3780, cid 2, qid 0 00:28:44.639 [2024-07-21 03:38:29.702279] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.639 [2024-07-21 03:38:29.702287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3a40, cid 4, qid 0 00:28:44.639 [2024-07-21 03:38:29.702412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.639 [2024-07-21 03:38:29.702427] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.639 [2024-07-21 03:38:29.702434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702440] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3a40) on tqpair=0x235b980 
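The records above show the host programming the async event configuration, arming four Asynchronous Event Request slots (cid 0-3), and then querying the keep-alive timer (cid 4). A minimal C sketch of how a host consumes those AER slots through SPDK's public API follows; it assumes an already-connected ctrlr handle, and the names aer_cb/service_admin_queue are illustrative, not taken from this run.

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative callback: invoked when one of the outstanding AER slots
 * completes, e.g. on a discovery log change notice. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "AER completed in error\n");
                return;
        }
        printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* Register the callback, then keep servicing the admin queue; the same
 * poll loop also drives the keep-alive traffic configured above. */
static void
service_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        for (;;) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
}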
00:28:44.639 [2024-07-21 03:38:29.702450] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:44.639 [2024-07-21 03:38:29.702459] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:44.639 [2024-07-21 03:38:29.702476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.639 [2024-07-21 03:38:29.702485] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x235b980) 00:28:44.639 [2024-07-21 03:38:29.702496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.640 [2024-07-21 03:38:29.702516] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3a40, cid 4, qid 0 00:28:44.640 [2024-07-21 03:38:29.702635] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.640 [2024-07-21 03:38:29.702649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.640 [2024-07-21 03:38:29.702656] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702663] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x235b980): datao=0, datal=4096, cccid=4 00:28:44.640 [2024-07-21 03:38:29.702671] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c3a40) on tqpair(0x235b980): expected_datao=0, payload_size=4096 00:28:44.640 [2024-07-21 03:38:29.702678] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702689] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702696] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.640 [2024-07-21 03:38:29.702717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.640 [2024-07-21 03:38:29.702724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702730] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3a40) on tqpair=0x235b980 00:28:44.640 [2024-07-21 03:38:29.702750] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:44.640 [2024-07-21 03:38:29.702788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x235b980) 00:28:44.640 [2024-07-21 03:38:29.702810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.640 [2024-07-21 03:38:29.702821] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.702834] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x235b980) 00:28:44.640 [2024-07-21 03:38:29.702844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.640 [2024-07-21 03:38:29.702876] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3a40, cid 4, qid 0 00:28:44.640 [2024-07-21 03:38:29.702889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3ba0, cid 5, qid 0 00:28:44.640 [2024-07-21 03:38:29.703074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.640 [2024-07-21 03:38:29.703089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.640 [2024-07-21 03:38:29.703096] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.703102] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x235b980): datao=0, datal=1024, cccid=4 00:28:44.640 [2024-07-21 03:38:29.703110] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c3a40) on tqpair(0x235b980): expected_datao=0, payload_size=1024 00:28:44.640 [2024-07-21 03:38:29.703117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.703127] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.703134] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.703142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.640 [2024-07-21 03:38:29.703151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.640 [2024-07-21 03:38:29.703157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.703164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3ba0) on tqpair=0x235b980 00:28:44.640 [2024-07-21 03:38:29.743727] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.640 [2024-07-21 03:38:29.743761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.640 [2024-07-21 03:38:29.743769] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.743776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3a40) on tqpair=0x235b980 00:28:44.640 [2024-07-21 03:38:29.743795] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.743805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x235b980) 00:28:44.640 [2024-07-21 03:38:29.743817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.640 [2024-07-21 03:38:29.743846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3a40, cid 4, qid 0 00:28:44.640 [2024-07-21 03:38:29.743970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.640 [2024-07-21 03:38:29.743985] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.640 [2024-07-21 03:38:29.743992] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.743998] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x235b980): datao=0, datal=3072, cccid=4 00:28:44.640 [2024-07-21 03:38:29.744006] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c3a40) on tqpair(0x235b980): expected_datao=0, payload_size=3072 00:28:44.640 [2024-07-21 03:38:29.744013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.744033] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
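The GET LOG PAGE commands in this stretch decode to reads of log page 0x70 (discovery): cdw10:00ff0070 fetches the 1024-byte header (generation counter, record count), cdw10:02ff0070 fetches the remaining 3072 bytes of entries, and the later cdw10:00010070 re-reads the first 8 bytes so the host can check that the generation counter did not change mid-read. A minimal sketch of the header read with SPDK's public API is below; log_page_done and the done flag are illustrative.

#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* Illustrative completion callback: just flags that the read finished. */
static void
log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        *(bool *)arg = true;
}

/* Read the 1024-byte discovery log header; entries are then fetched at
 * offset sizeof(*hdr), and genctr is re-read afterwards to detect a
 * concurrent change (retry the whole read if it moved). */
static int
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                      struct spdk_nvmf_discovery_log_page *hdr, bool *done)
{
        return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                SPDK_NVME_GLOBAL_NS_TAG,
                                                hdr, sizeof(*hdr), 0,
                                                log_page_done, done);
}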
00:28:44.640 [2024-07-21 03:38:29.744042] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.640 [2024-07-21 03:38:29.788646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.640 [2024-07-21 03:38:29.788669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3a40) on tqpair=0x235b980 00:28:44.640 [2024-07-21 03:38:29.788695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x235b980) 00:28:44.640 [2024-07-21 03:38:29.788715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.640 [2024-07-21 03:38:29.788751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c3a40, cid 4, qid 0 00:28:44.640 [2024-07-21 03:38:29.788859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.640 [2024-07-21 03:38:29.788871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.640 [2024-07-21 03:38:29.788878] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788885] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x235b980): datao=0, datal=8, cccid=4 00:28:44.640 [2024-07-21 03:38:29.788893] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c3a40) on tqpair(0x235b980): expected_datao=0, payload_size=8 00:28:44.640 [2024-07-21 03:38:29.788900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788925] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.788933] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.834648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.640 [2024-07-21 03:38:29.834666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.640 [2024-07-21 03:38:29.834674] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.640 [2024-07-21 03:38:29.834681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c3a40) on tqpair=0x235b980
00:28:44.640 =====================================================
00:28:44.640 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:44.640 =====================================================
00:28:44.640 Controller Capabilities/Features
00:28:44.640 ================================
00:28:44.640 Vendor ID: 0000
00:28:44.640 Subsystem Vendor ID: 0000
00:28:44.640 Serial Number: ....................
00:28:44.640 Model Number: ........................................
00:28:44.640 Firmware Version: 24.05.1
00:28:44.640 Recommended Arb Burst: 0
00:28:44.640 IEEE OUI Identifier: 00 00 00
00:28:44.640 Multi-path I/O
00:28:44.640 May have multiple subsystem ports: No
00:28:44.640 May have multiple controllers: No
00:28:44.640 Associated with SR-IOV VF: No
00:28:44.640 Max Data Transfer Size: 131072
00:28:44.640 Max Number of Namespaces: 0
00:28:44.640 Max Number of I/O Queues: 1024
00:28:44.640 NVMe Specification Version (VS): 1.3
00:28:44.640 NVMe Specification Version (Identify): 1.3
00:28:44.640 Maximum Queue Entries: 128
00:28:44.640 Contiguous Queues Required: Yes
00:28:44.640 Arbitration Mechanisms Supported
00:28:44.640 Weighted Round Robin: Not Supported
00:28:44.640 Vendor Specific: Not Supported
00:28:44.640 Reset Timeout: 15000 ms
00:28:44.640 Doorbell Stride: 4 bytes
00:28:44.640 NVM Subsystem Reset: Not Supported
00:28:44.640 Command Sets Supported
00:28:44.640 NVM Command Set: Supported
00:28:44.640 Boot Partition: Not Supported
00:28:44.640 Memory Page Size Minimum: 4096 bytes
00:28:44.640 Memory Page Size Maximum: 4096 bytes
00:28:44.640 Persistent Memory Region: Not Supported
00:28:44.640 Optional Asynchronous Events Supported
00:28:44.640 Namespace Attribute Notices: Not Supported
00:28:44.640 Firmware Activation Notices: Not Supported
00:28:44.640 ANA Change Notices: Not Supported
00:28:44.640 PLE Aggregate Log Change Notices: Not Supported
00:28:44.640 LBA Status Info Alert Notices: Not Supported
00:28:44.640 EGE Aggregate Log Change Notices: Not Supported
00:28:44.640 Normal NVM Subsystem Shutdown event: Not Supported
00:28:44.640 Zone Descriptor Change Notices: Not Supported
00:28:44.640 Discovery Log Change Notices: Supported
00:28:44.640 Controller Attributes
00:28:44.640 128-bit Host Identifier: Not Supported
00:28:44.640 Non-Operational Permissive Mode: Not Supported
00:28:44.640 NVM Sets: Not Supported
00:28:44.640 Read Recovery Levels: Not Supported
00:28:44.640 Endurance Groups: Not Supported
00:28:44.640 Predictable Latency Mode: Not Supported
00:28:44.640 Traffic Based Keep Alive: Not Supported
00:28:44.640 Namespace Granularity: Not Supported
00:28:44.640 SQ Associations: Not Supported
00:28:44.640 UUID List: Not Supported
00:28:44.640 Multi-Domain Subsystem: Not Supported
00:28:44.640 Fixed Capacity Management: Not Supported
00:28:44.640 Variable Capacity Management: Not Supported
00:28:44.640 Delete Endurance Group: Not Supported
00:28:44.640 Delete NVM Set: Not Supported
00:28:44.640 Extended LBA Formats Supported: Not Supported
00:28:44.640 Flexible Data Placement Supported: Not Supported
00:28:44.640
00:28:44.640 Controller Memory Buffer Support
00:28:44.640 ================================
00:28:44.640 Supported: No
00:28:44.640
00:28:44.640 Persistent Memory Region Support
00:28:44.640 ================================
00:28:44.640 Supported: No
00:28:44.640
00:28:44.640 Admin Command Set Attributes
00:28:44.640 ============================
00:28:44.640 Security Send/Receive: Not Supported
00:28:44.640 Format NVM: Not Supported
00:28:44.641 Firmware Activate/Download: Not Supported
00:28:44.641 Namespace Management: Not Supported
00:28:44.641 Device Self-Test: Not Supported
00:28:44.641 Directives: Not Supported
00:28:44.641 NVMe-MI: Not Supported
00:28:44.641 Virtualization Management: Not Supported
00:28:44.641 Doorbell Buffer Config: Not Supported
00:28:44.641 Get LBA Status Capability: Not Supported
00:28:44.641 Command & Feature Lockdown Capability: Not Supported
00:28:44.641 Abort Command Limit: 1
00:28:44.641 Async Event Request Limit: 4
00:28:44.641 Number of Firmware Slots: N/A
00:28:44.641 Firmware Slot 1 Read-Only: N/A
00:28:44.641 Firmware Activation Without Reset: N/A
00:28:44.641 Multiple Update Detection Support: N/A
00:28:44.641 Firmware Update Granularity: No Information Provided
00:28:44.641 Per-Namespace SMART Log: No
00:28:44.641 Asymmetric Namespace Access Log Page: Not Supported
00:28:44.641 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:44.641 Command Effects Log Page: Not Supported
00:28:44.641 Get Log Page Extended Data: Supported
00:28:44.641 Telemetry Log Pages: Not Supported
00:28:44.641 Persistent Event Log Pages: Not Supported
00:28:44.641 Supported Log Pages Log Page: May Support
00:28:44.641 Commands Supported & Effects Log Page: Not Supported
00:28:44.641 Feature Identifiers & Effects Log Page: May Support
00:28:44.641 NVMe-MI Commands & Effects Log Page: May Support
00:28:44.641 Data Area 4 for Telemetry Log: Not Supported
00:28:44.641 Error Log Page Entries Supported: 128
00:28:44.641 Keep Alive: Not Supported
00:28:44.641
00:28:44.641 NVM Command Set Attributes
00:28:44.641 ==========================
00:28:44.641 Submission Queue Entry Size
00:28:44.641 Max: 1
00:28:44.641 Min: 1
00:28:44.641 Completion Queue Entry Size
00:28:44.641 Max: 1
00:28:44.641 Min: 1
00:28:44.641 Number of Namespaces: 0
00:28:44.641 Compare Command: Not Supported
00:28:44.641 Write Uncorrectable Command: Not Supported
00:28:44.641 Dataset Management Command: Not Supported
00:28:44.641 Write Zeroes Command: Not Supported
00:28:44.641 Set Features Save Field: Not Supported
00:28:44.641 Reservations: Not Supported
00:28:44.641 Timestamp: Not Supported
00:28:44.641 Copy: Not Supported
00:28:44.641 Volatile Write Cache: Not Present
00:28:44.641 Atomic Write Unit (Normal): 1
00:28:44.641 Atomic Write Unit (PFail): 1
00:28:44.641 Atomic Compare & Write Unit: 1
00:28:44.641 Fused Compare & Write: Supported
00:28:44.641 Scatter-Gather List
00:28:44.641 SGL Command Set: Supported
00:28:44.641 SGL Keyed: Supported
00:28:44.641 SGL Bit Bucket Descriptor: Not Supported
00:28:44.641 SGL Metadata Pointer: Not Supported
00:28:44.641 Oversized SGL: Not Supported
00:28:44.641 SGL Metadata Address: Not Supported
00:28:44.641 SGL Offset: Supported
00:28:44.641 Transport SGL Data Block: Not Supported
00:28:44.641 Replay Protected Memory Block: Not Supported
00:28:44.641
00:28:44.641 Firmware Slot Information
00:28:44.641 =========================
00:28:44.641 Active slot: 0
00:28:44.641
00:28:44.641
00:28:44.641 Error Log
00:28:44.641 =========
00:28:44.641
00:28:44.641 Active Namespaces
00:28:44.641 =================
00:28:44.641 Discovery Log Page
00:28:44.641 ==================
00:28:44.641 Generation Counter: 2
00:28:44.641 Number of Records: 2
00:28:44.641 Record Format: 0
00:28:44.641
00:28:44.641 Discovery Log Entry 0
00:28:44.641 ----------------------
00:28:44.641 Transport Type: 3 (TCP)
00:28:44.641 Address Family: 1 (IPv4)
00:28:44.641 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:44.641 Entry Flags:
00:28:44.641 Duplicate Returned Information: 1
00:28:44.641 Explicit Persistent Connection Support for Discovery: 1
00:28:44.641 Transport Requirements:
00:28:44.641 Secure Channel: Not Required
00:28:44.641 Port ID: 0 (0x0000)
00:28:44.641 Controller ID: 65535 (0xffff)
00:28:44.641 Admin Max SQ Size: 128
00:28:44.641 Transport Service Identifier: 4420
00:28:44.641 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:44.641 Transport Address: 10.0.0.2
00:28:44.641 Discovery Log Entry 1
00:28:44.641 ----------------------
00:28:44.641 Transport Type: 3 (TCP)
00:28:44.641 Address Family: 1 (IPv4)
00:28:44.641 Subsystem Type: 2 (NVM Subsystem)
00:28:44.641 Entry Flags:
00:28:44.641 Duplicate Returned Information: 0
00:28:44.641 Explicit Persistent Connection Support for Discovery: 0
00:28:44.641 Transport Requirements:
00:28:44.641 Secure Channel: Not Required
00:28:44.641 Port ID: 0 (0x0000)
00:28:44.641 Controller ID: 65535 (0xffff)
00:28:44.641 Admin Max SQ Size: 128
00:28:44.641 Transport Service Identifier: 4420
00:28:44.641 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:44.641 Transport Address: 10.0.0.2
[2024-07-21 03:38:29.834817] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:44.641 [2024-07-21 03:38:29.834843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.641 [2024-07-21 03:38:29.834855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.641 [2024-07-21 03:38:29.834865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.641 [2024-07-21 03:38:29.834875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.641 [2024-07-21 03:38:29.834892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.834902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.834908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.641 [2024-07-21 03:38:29.834935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.641 [2024-07-21 03:38:29.834960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.641 [2024-07-21 03:38:29.835049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.641 [2024-07-21 03:38:29.835063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.641 [2024-07-21 03:38:29.835070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835077] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.641 [2024-07-21 03:38:29.835090] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835098] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835104] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.641 [2024-07-21 03:38:29.835114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.641 [2024-07-21 03:38:29.835140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.641 [2024-07-21 03:38:29.835249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.641 [2024-07-21 03:38:29.835262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.641 [2024-07-21 03:38:29.835273]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.641 [2024-07-21 03:38:29.835290] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:44.641 [2024-07-21 03:38:29.835297] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:44.641 [2024-07-21 03:38:29.835313] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835322] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.641 [2024-07-21 03:38:29.835339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.641 [2024-07-21 03:38:29.835359] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.641 [2024-07-21 03:38:29.835449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.641 [2024-07-21 03:38:29.835463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.641 [2024-07-21 03:38:29.835469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835476] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.641 [2024-07-21 03:38:29.835493] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835509] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.641 [2024-07-21 03:38:29.835519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.641 [2024-07-21 03:38:29.835539] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.641 [2024-07-21 03:38:29.835641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.641 [2024-07-21 03:38:29.835656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.641 [2024-07-21 03:38:29.835663] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.641 [2024-07-21 03:38:29.835689] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835705] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.641 [2024-07-21 03:38:29.835715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.641 [2024-07-21 03:38:29.835736] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.641 [2024-07-21 03:38:29.835871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.641 [2024-07-21 
03:38:29.835886] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.641 [2024-07-21 03:38:29.835892] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.641 [2024-07-21 03:38:29.835899] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.835917] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.835941] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.835948] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.835958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.835979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.836073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.836087] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.836094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.836118] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836134] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.836144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.836164] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.836277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.836289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.836296] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836303] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.836320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836329] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.836345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.836365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.836445] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.836458] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.836465] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
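The destruct records above (RTD3E = 0 us, shutdown timeout = 10000 ms) and the long run of FABRIC PROPERTY GET completions that follows are the controller shutdown handshake: the host sets CC.SHN and then polls CSTS.SHST over the fabric until shutdown completes. A generic sketch of that handshake per the NVMe spec, with hypothetical read_reg32/write_reg32 accessors standing in for the PROPERTY GET/SET commands seen here:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical register accessors; over fabrics these map onto the
 * PROPERTY GET/SET exchanges visible in the trace. */
uint32_t read_reg32(uint32_t off);
void write_reg32(uint32_t off, uint32_t val);

#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c

/* Request a normal shutdown (CC.SHN = 01b), then poll CSTS.SHST
 * until it reads 10b (shutdown complete) or the timeout expires. */
static bool
shutdown_ctrlr(unsigned int timeout_ms)
{
        uint32_t cc = read_reg32(NVME_REG_CC);

        cc = (cc & ~(3u << 14)) | (1u << 14);   /* CC.SHN = normal shutdown */
        write_reg32(NVME_REG_CC, cc);
        for (unsigned int i = 0; i < timeout_ms; i++) {
                if (((read_reg32(NVME_REG_CSTS) >> 2) & 3u) == 2u) {
                        return true;            /* CSTS.SHST = complete */
                }
                /* sleep ~1 ms between polls (omitted) */
        }
        return false;
}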
00:28:44.642 [2024-07-21 03:38:29.836471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.836488] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836497] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.836514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.836534] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.836645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.836659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.836666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.836690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.836717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.836738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.836875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.836895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.836903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836910] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.836928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.836959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.836970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.836990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.837074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.837088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.837095] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837102] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.837119] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837134] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.837144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.837164] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.837248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.837261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.837268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.837291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837301] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837307] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.837317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.837337] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.837427] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.837440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.837447] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.837471] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837480] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.837496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.837516] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.837633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.837647] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.837659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.837684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837694] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 
03:38:29.837700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.837711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.837732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.837868] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.837882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.837889] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.837914] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.837944] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.837955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.837975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.838059] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.838073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.838079] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.838103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838119] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.838128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.838148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.838232] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.642 [2024-07-21 03:38:29.838245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.642 [2024-07-21 03:38:29.838251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.642 [2024-07-21 03:38:29.838274] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.642 [2024-07-21 03:38:29.838290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.642 [2024-07-21 03:38:29.838300] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.642 [2024-07-21 03:38:29.838319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.642 [2024-07-21 03:38:29.838471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.838483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.838490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838500] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.838518] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.838544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.838564] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.838702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.838718] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.838724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.838749] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838765] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.838776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.838797] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.838883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.838898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.838905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.838929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.838959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.838969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.838989] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.839075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.839087] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.839094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.839117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839132] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.839143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.839162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.839277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.839288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.839295] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839302] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.839323] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839333] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.839349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.839369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.839453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.839466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.839473] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839480] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.839496] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839512] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.839522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.839542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.839646] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:44.643 [2024-07-21 03:38:29.839661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.839668] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839675] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.839693] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.839719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.839740] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.839859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.839871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.839878] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.839902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.839917] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.839942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.839963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.840047] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.643 [2024-07-21 03:38:29.840059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.643 [2024-07-21 03:38:29.840066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.840072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.643 [2024-07-21 03:38:29.840093] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.840103] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.643 [2024-07-21 03:38:29.840109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.643 [2024-07-21 03:38:29.840119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.643 [2024-07-21 03:38:29.840139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.643 [2024-07-21 03:38:29.840222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.840236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.840242] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.644 [2024-07-21 03:38:29.840266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840275] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.644 [2024-07-21 03:38:29.840292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.644 [2024-07-21 03:38:29.840311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.644 [2024-07-21 03:38:29.840406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.840420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.840426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840433] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.644 [2024-07-21 03:38:29.840450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.840481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.644 [2024-07-21 03:38:29.840491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.644 [2024-07-21 03:38:29.840511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.644 [2024-07-21 03:38:29.844618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.844652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.844659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.844666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on tqpair=0x235b980 00:28:44.644 [2024-07-21 03:38:29.844685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.844695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.844701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x235b980) 00:28:44.644 [2024-07-21 03:38:29.844712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.644 [2024-07-21 03:38:29.844733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c38e0, cid 3, qid 0 00:28:44.644 [2024-07-21 03:38:29.844838] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.844850] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.844857] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.844864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23c38e0) on 
tqpair=0x235b980
00:28:44.644 [2024-07-21 03:38:29.844878] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds
00:28:44.644
00:28:44.644 03:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:28:44.644 [2024-07-21 03:38:29.879408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:44.644 [2024-07-21 03:38:29.879454] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505968 ]
00:28:44.644 EAL: No free 2048 kB hugepages reported on node 1
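What follows in the trace is the host-side controller bring-up that spdk_nvme_identify drives: FABRIC CONNECT on the admin queue, VS/CAP property reads, the CC.EN/CSTS.RDY enable handshake, then the IDENTIFY commands. For reference, a minimal C sketch of the same run against SPDK's public API is below. It is illustrative only: the program name and the omitted error handling are this sketch's own, it assumes the v24.05-era headers named in the banner above, and the authoritative implementation is the identify example shipped in the SPDK tree.

/* Hedged sketch: parse the -r transport ID string shown above and connect.
 * spdk_nvme_connect() runs the connect/enable/identify state machine that
 * the _nvme_ctrlr_set_state *DEBUG* records below trace one step at a time. */
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* name local to this sketch */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same key:value format the log shows being passed to -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* default ctrlr opts */
    if (ctrlr == NULL) {
        return 1;
    }

    /* ... query identify data here (see the later sketches) ... */

    spdk_nvme_detach(ctrlr);
    return 0;
}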
00:28:44.644 [2024-07-21 03:38:29.915439] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:28:44.644 [2024-07-21 03:38:29.915490] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:28:44.644 [2024-07-21 03:38:29.915500] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:28:44.644 [2024-07-21 03:38:29.915515] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:28:44.644 [2024-07-21 03:38:29.915528] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:28:44.644 [2024-07-21 03:38:29.915701] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:28:44.644 [2024-07-21 03:38:29.915749] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7f9980 0
00:28:44.644 [2024-07-21 03:38:29.922641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:28:44.644 [2024-07-21 03:38:29.922661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:28:44.644 [2024-07-21 03:38:29.922669] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:28:44.644 [2024-07-21 03:38:29.922675] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:28:44.644 [2024-07-21 03:38:29.922715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.922726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.922733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980)
00:28:44.644 [2024-07-21 03:38:29.922747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:44.644 [2024-07-21 03:38:29.922778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0
00:28:44.644 [2024-07-21 03:38:29.930630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.644 [2024-07-21 03:38:29.930648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.644 [2024-07-21 03:38:29.930656] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980
00:28:44.644 [2024-07-21 03:38:29.930677] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:28:44.644 [2024-07-21 03:38:29.930688] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:28:44.644 [2024-07-21 03:38:29.930697] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:28:44.644 [2024-07-21 03:38:29.930716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980)
00:28:44.644 [2024-07-21 03:38:29.930744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.644 [2024-07-21 03:38:29.930772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0
00:28:44.644 [2024-07-21 03:38:29.930870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.644 [2024-07-21 03:38:29.930883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.644 [2024-07-21 03:38:29.930890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930897] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980
00:28:44.644 [2024-07-21 03:38:29.930909] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:28:44.644 [2024-07-21 03:38:29.930924] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:28:44.644 [2024-07-21 03:38:29.930937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.930951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980)
00:28:44.644 [2024-07-21 03:38:29.930962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.644 [2024-07-21 03:38:29.930983] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0
00:28:44.644 [2024-07-21 03:38:29.931069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.644 [2024-07-21 03:38:29.931084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.644 [2024-07-21 03:38:29.931091] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.931098] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980
00:28:44.644 [2024-07-21 03:38:29.931106] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:28:44.644 [2024-07-21 03:38:29.931120] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:28:44.644 [2024-07-21 03:38:29.931133] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.931140] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.644 [2024-07-21 03:38:29.931147] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.644 [2024-07-21 03:38:29.931157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.644 [2024-07-21 03:38:29.931178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.644 [2024-07-21 03:38:29.931265] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.931280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.931287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931293] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.644 [2024-07-21 03:38:29.931302] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:44.644 [2024-07-21 03:38:29.931319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.644 [2024-07-21 03:38:29.931346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.644 [2024-07-21 03:38:29.931366] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.644 [2024-07-21 03:38:29.931446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.644 [2024-07-21 03:38:29.931459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.644 [2024-07-21 03:38:29.931469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931477] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.644 [2024-07-21 03:38:29.931485] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:44.644 [2024-07-21 03:38:29.931493] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:44.644 [2024-07-21 03:38:29.931506] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:44.644 [2024-07-21 03:38:29.931622] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:44.644 [2024-07-21 03:38:29.931631] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:44.644 [2024-07-21 03:38:29.931643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931650] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.644 [2024-07-21 03:38:29.931657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.645 [2024-07-21 03:38:29.931668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.645 [2024-07-21 
03:38:29.931689] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.645 [2024-07-21 03:38:29.931772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.645 [2024-07-21 03:38:29.931785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.645 [2024-07-21 03:38:29.931792] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.931799] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.645 [2024-07-21 03:38:29.931807] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:44.645 [2024-07-21 03:38:29.931823] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.931832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.931838] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.645 [2024-07-21 03:38:29.931849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.645 [2024-07-21 03:38:29.931869] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.645 [2024-07-21 03:38:29.931952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.645 [2024-07-21 03:38:29.931964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.645 [2024-07-21 03:38:29.931971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.931977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.645 [2024-07-21 03:38:29.931985] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:44.645 [2024-07-21 03:38:29.931993] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:44.645 [2024-07-21 03:38:29.932006] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:44.645 [2024-07-21 03:38:29.932020] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:44.645 [2024-07-21 03:38:29.932036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.932045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.645 [2024-07-21 03:38:29.932059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.645 [2024-07-21 03:38:29.932082] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.645 [2024-07-21 03:38:29.932204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.645 [2024-07-21 03:38:29.932217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.645 [2024-07-21 03:38:29.932224] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.932230] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=4096, cccid=0 00:28:44.645 [2024-07-21 03:38:29.932238] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8614c0) on tqpair(0x7f9980): expected_datao=0, payload_size=4096 00:28:44.645 [2024-07-21 03:38:29.932245] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.932262] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.645 [2024-07-21 03:38:29.932271] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.904 [2024-07-21 03:38:29.972696] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.904 [2024-07-21 03:38:29.972717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.904 [2024-07-21 03:38:29.972725] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.904 [2024-07-21 03:38:29.972733] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.904 [2024-07-21 03:38:29.972749] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:44.904 [2024-07-21 03:38:29.972759] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:44.904 [2024-07-21 03:38:29.972767] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:44.904 [2024-07-21 03:38:29.972774] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:44.904 [2024-07-21 03:38:29.972782] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:44.904 [2024-07-21 03:38:29.972790] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:44.904 [2024-07-21 03:38:29.972805] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:44.904 [2024-07-21 03:38:29.972818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.904 [2024-07-21 03:38:29.972826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.904 [2024-07-21 03:38:29.972833] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.904 [2024-07-21 03:38:29.972845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:44.904 [2024-07-21 03:38:29.972868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.905 [2024-07-21 03:38:29.972952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:29.972965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 03:38:29.972972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.972979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8614c0) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:29.972990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.972997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973004] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.905 [2024-07-21 03:38:29.973028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973043] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.905 [2024-07-21 03:38:29.973061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973074] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.905 [2024-07-21 03:38:29.973093] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973106] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.905 [2024-07-21 03:38:29.973124] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973143] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.905 [2024-07-21 03:38:29.973212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8614c0, cid 0, qid 0 00:28:44.905 [2024-07-21 03:38:29.973239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861620, cid 1, qid 0 00:28:44.905 [2024-07-21 03:38:29.973247] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861780, cid 2, qid 0 00:28:44.905 [2024-07-21 03:38:29.973254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8618e0, cid 3, qid 0 00:28:44.905 [2024-07-21 03:38:29.973262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.905 [2024-07-21 03:38:29.973378] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:29.973393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 
03:38:29.973400] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:29.973415] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:44.905 [2024-07-21 03:38:29.973424] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973438] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973449] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:44.905 [2024-07-21 03:38:29.973511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.905 [2024-07-21 03:38:29.973595] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:29.973608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 03:38:29.973622] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:29.973698] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973718] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:29.973733] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973741] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:29.973752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.905 [2024-07-21 03:38:29.973773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.905 [2024-07-21 03:38:29.973869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.905 [2024-07-21 03:38:29.973882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.905 [2024-07-21 03:38:29.973889] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973896] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=4096, cccid=4 00:28:44.905 [2024-07-21 03:38:29.973903] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861a40) on tqpair(0x7f9980): expected_datao=0, payload_size=4096 00:28:44.905 [2024-07-21 03:38:29.973911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973927] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:29.973936] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.014747] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:30.014786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 03:38:30.014799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.014811] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:30.014840] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:44.905 [2024-07-21 03:38:30.014871] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.014898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.014923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.014937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:30.014965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.905 [2024-07-21 03:38:30.015006] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.905 [2024-07-21 03:38:30.015128] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.905 [2024-07-21 03:38:30.015148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.905 [2024-07-21 03:38:30.015165] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.015177] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=4096, cccid=4 00:28:44.905 [2024-07-21 03:38:30.015190] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861a40) on tqpair(0x7f9980): expected_datao=0, payload_size=4096 00:28:44.905 [2024-07-21 03:38:30.015204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.015230] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.015243] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.057634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:30.057667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 03:38:30.057679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.057690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:30.057723] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.057758] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.057782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.057794] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.905 [2024-07-21 03:38:30.057815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.905 [2024-07-21 03:38:30.057851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.905 [2024-07-21 03:38:30.057976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.905 [2024-07-21 03:38:30.057996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.905 [2024-07-21 03:38:30.058006] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.058016] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=4096, cccid=4 00:28:44.905 [2024-07-21 03:38:30.058029] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861a40) on tqpair(0x7f9980): expected_datao=0, payload_size=4096 00:28:44.905 [2024-07-21 03:38:30.058040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.058057] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.058071] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.058087] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.905 [2024-07-21 03:38:30.058101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.905 [2024-07-21 03:38:30.058111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.905 [2024-07-21 03:38:30.058122] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.905 [2024-07-21 03:38:30.058141] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.058167] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:44.905 [2024-07-21 03:38:30.058190] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:44.906 [2024-07-21 03:38:30.058205] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:44.906 [2024-07-21 03:38:30.058218] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:44.906 [2024-07-21 03:38:30.058239] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:44.906 [2024-07-21 03:38:30.058255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:44.906 [2024-07-21 03:38:30.058272] 
nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:44.906 [2024-07-21 03:38:30.058310] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058326] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.058344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.058364] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.058401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.906 [2024-07-21 03:38:30.058452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.906 [2024-07-21 03:38:30.058470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861ba0, cid 5, qid 0 00:28:44.906 [2024-07-21 03:38:30.058659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.906 [2024-07-21 03:38:30.058680] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.906 [2024-07-21 03:38:30.058691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980 00:28:44.906 [2024-07-21 03:38:30.058719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.906 [2024-07-21 03:38:30.058736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.906 [2024-07-21 03:38:30.058746] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861ba0) on tqpair=0x7f9980 00:28:44.906 [2024-07-21 03:38:30.058777] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.058807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.058840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861ba0, cid 5, qid 0 00:28:44.906 [2024-07-21 03:38:30.058935] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.906 [2024-07-21 03:38:30.058955] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.906 [2024-07-21 03:38:30.058965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.058974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861ba0) on tqpair=0x7f9980 00:28:44.906 [2024-07-21 03:38:30.058997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059013] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 
03:38:30.059030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059059] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861ba0, cid 5, qid 0 00:28:44.906 [2024-07-21 03:38:30.059150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.906 [2024-07-21 03:38:30.059170] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.906 [2024-07-21 03:38:30.059180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861ba0) on tqpair=0x7f9980 00:28:44.906 [2024-07-21 03:38:30.059224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.059254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861ba0, cid 5, qid 0 00:28:44.906 [2024-07-21 03:38:30.059365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:44.906 [2024-07-21 03:38:30.059384] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:44.906 [2024-07-21 03:38:30.059396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861ba0) on tqpair=0x7f9980 00:28:44.906 [2024-07-21 03:38:30.059437] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.059467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.059512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.059558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059576] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7f9980) 00:28:44.906 [2024-07-21 03:38:30.059603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 
nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.906 [2024-07-21 03:38:30.059646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861ba0, cid 5, qid 0 00:28:44.906 [2024-07-21 03:38:30.059664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861a40, cid 4, qid 0 00:28:44.906 [2024-07-21 03:38:30.059674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861d00, cid 6, qid 0 00:28:44.906 [2024-07-21 03:38:30.059684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861e60, cid 7, qid 0 00:28:44.906 [2024-07-21 03:38:30.059879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.906 [2024-07-21 03:38:30.059902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.906 [2024-07-21 03:38:30.059914] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059925] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=8192, cccid=5 00:28:44.906 [2024-07-21 03:38:30.059937] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861ba0) on tqpair(0x7f9980): expected_datao=0, payload_size=8192 00:28:44.906 [2024-07-21 03:38:30.059951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059968] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059978] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.059995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.906 [2024-07-21 03:38:30.060011] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.906 [2024-07-21 03:38:30.060022] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060032] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=512, cccid=4 00:28:44.906 [2024-07-21 03:38:30.060045] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861a40) on tqpair(0x7f9980): expected_datao=0, payload_size=512 00:28:44.906 [2024-07-21 03:38:30.060058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060070] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060080] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:44.906 [2024-07-21 03:38:30.060104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:44.906 [2024-07-21 03:38:30.060115] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060124] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=512, cccid=6 00:28:44.906 [2024-07-21 03:38:30.060137] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861d00) on tqpair(0x7f9980): expected_datao=0, payload_size=512 00:28:44.906 [2024-07-21 03:38:30.060150] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060164] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060174] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:44.906 [2024-07-21 03:38:30.060185] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:44.906 [2024-07-21 03:38:30.060198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:44.906 [2024-07-21 03:38:30.060210] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060221] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f9980): datao=0, datal=4096, cccid=7
00:28:44.906 [2024-07-21 03:38:30.060232] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x861e60) on tqpair(0x7f9980): expected_datao=0, payload_size=4096
00:28:44.906 [2024-07-21 03:38:30.060245] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060261] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060271] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.906 [2024-07-21 03:38:30.060300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.906 [2024-07-21 03:38:30.060311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060322] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861ba0) on tqpair=0x7f9980
00:28:44.906 [2024-07-21 03:38:30.060352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.906 [2024-07-21 03:38:30.060370] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.906 [2024-07-21 03:38:30.060379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861a40) on tqpair=0x7f9980
00:28:44.906 [2024-07-21 03:38:30.060421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.906 [2024-07-21 03:38:30.060436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.906 [2024-07-21 03:38:30.060447] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861d00) on tqpair=0x7f9980
00:28:44.906 [2024-07-21 03:38:30.060480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.906 [2024-07-21 03:38:30.060495] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.906 [2024-07-21 03:38:30.060504] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.906 [2024-07-21 03:38:30.060518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861e60) on tqpair=0x7f9980
00:28:44.906 =====================================================
00:28:44.906 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:44.907 =====================================================
00:28:44.907 Controller Capabilities/Features
00:28:44.907 ================================
00:28:44.907 Vendor ID: 8086
00:28:44.907 Subsystem Vendor ID: 8086
00:28:44.907 Serial Number: SPDK00000000000001
00:28:44.907 Model Number: SPDK bdev Controller
00:28:44.907 Firmware Version: 24.05.1
00:28:44.907 Recommended Arb Burst: 6
00:28:44.907 IEEE OUI Identifier: e4 d2 5c
00:28:44.907 Multi-path I/O
00:28:44.907 May have multiple subsystem ports: Yes
00:28:44.907 May have multiple controllers: Yes
00:28:44.907 Associated with SR-IOV VF: No
00:28:44.907 Max Data Transfer Size: 131072
00:28:44.907 Max Number of Namespaces: 32
00:28:44.907 Max Number of I/O Queues: 127
00:28:44.907 NVMe Specification Version (VS): 1.3
00:28:44.907 NVMe Specification Version (Identify): 1.3
00:28:44.907 Maximum Queue Entries: 128
00:28:44.907 Contiguous Queues Required: Yes
00:28:44.907 Arbitration Mechanisms Supported
00:28:44.907 Weighted Round Robin: Not Supported
00:28:44.907 Vendor Specific: Not Supported
00:28:44.907 Reset Timeout: 15000 ms
00:28:44.907 Doorbell Stride: 4 bytes
00:28:44.907 NVM Subsystem Reset: Not Supported
00:28:44.907 Command Sets Supported
00:28:44.907 NVM Command Set: Supported
00:28:44.907 Boot Partition: Not Supported
00:28:44.907 Memory Page Size Minimum: 4096 bytes
00:28:44.907 Memory Page Size Maximum: 4096 bytes
00:28:44.907 Persistent Memory Region: Not Supported
00:28:44.907 Optional Asynchronous Events Supported
00:28:44.907 Namespace Attribute Notices: Supported
00:28:44.907 Firmware Activation Notices: Not Supported
00:28:44.907 ANA Change Notices: Not Supported
00:28:44.907 PLE Aggregate Log Change Notices: Not Supported
00:28:44.907 LBA Status Info Alert Notices: Not Supported
00:28:44.907 EGE Aggregate Log Change Notices: Not Supported
00:28:44.907 Normal NVM Subsystem Shutdown event: Not Supported
00:28:44.907 Zone Descriptor Change Notices: Not Supported
00:28:44.907 Discovery Log Change Notices: Not Supported
00:28:44.907 Controller Attributes
00:28:44.907 128-bit Host Identifier: Supported
00:28:44.907 Non-Operational Permissive Mode: Not Supported
00:28:44.907 NVM Sets: Not Supported
00:28:44.907 Read Recovery Levels: Not Supported
00:28:44.907 Endurance Groups: Not Supported
00:28:44.907 Predictable Latency Mode: Not Supported
00:28:44.907 Traffic Based Keep ALive: Not Supported
00:28:44.907 Namespace Granularity: Not Supported
00:28:44.907 SQ Associations: Not Supported
00:28:44.907 UUID List: Not Supported
00:28:44.907 Multi-Domain Subsystem: Not Supported
00:28:44.907 Fixed Capacity Management: Not Supported
00:28:44.907 Variable Capacity Management: Not Supported
00:28:44.907 Delete Endurance Group: Not Supported
00:28:44.907 Delete NVM Set: Not Supported
00:28:44.907 Extended LBA Formats Supported: Not Supported
00:28:44.907 Flexible Data Placement Supported: Not Supported
00:28:44.907
00:28:44.907 Controller Memory Buffer Support
00:28:44.907 ================================
00:28:44.907 Supported: No
00:28:44.907
00:28:44.907 Persistent Memory Region Support
00:28:44.907 ================================
00:28:44.907 Supported: No
00:28:44.907
00:28:44.907 Admin Command Set Attributes
00:28:44.907 ============================
00:28:44.907 Security Send/Receive: Not Supported
00:28:44.907 Format NVM: Not Supported
00:28:44.907 Firmware Activate/Download: Not Supported
00:28:44.907 Namespace Management: Not Supported
00:28:44.907 Device Self-Test: Not Supported
00:28:44.907 Directives: Not Supported
00:28:44.907 NVMe-MI: Not Supported
00:28:44.907 Virtualization Management: Not Supported
00:28:44.907 Doorbell Buffer Config: Not Supported
00:28:44.907 Get LBA Status Capability: Not Supported
00:28:44.907 Command & Feature Lockdown Capability: Not Supported
00:28:44.907 Abort Command Limit: 4
00:28:44.907 Async Event Request Limit: 4
00:28:44.907 Number of Firmware Slots: N/A
00:28:44.907 Firmware Slot 1 Read-Only: N/A
00:28:44.907 Firmware Activation Without Reset: N/A
00:28:44.907 Multiple Update Detection Support: N/A
00:28:44.907 Firmware Update Granularity: No Information Provided
00:28:44.907 Per-Namespace SMART Log: No
00:28:44.907 Asymmetric Namespace Access Log Page: Not Supported
00:28:44.907 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:44.907 Command Effects Log Page: Supported
00:28:44.907 Get Log Page Extended Data: Supported
00:28:44.907 Telemetry Log Pages: Not Supported
00:28:44.907 Persistent Event Log Pages: Not Supported
00:28:44.907 Supported Log Pages Log Page: May Support
00:28:44.907 Commands Supported & Effects Log Page: Not Supported
00:28:44.907 Feature Identifiers & Effects Log Page:May Support
00:28:44.907 NVMe-MI Commands & Effects Log Page: May Support
00:28:44.907 Data Area 4 for Telemetry Log: Not Supported
00:28:44.907 Error Log Page Entries Supported: 128
00:28:44.907 Keep Alive: Supported
00:28:44.907 Keep Alive Granularity: 10000 ms
00:28:44.907
00:28:44.907 NVM Command Set Attributes
00:28:44.907 ==========================
00:28:44.907 Submission Queue Entry Size
00:28:44.907 Max: 64
00:28:44.907 Min: 64
00:28:44.907 Completion Queue Entry Size
00:28:44.907 Max: 16
00:28:44.907 Min: 16
00:28:44.907 Number of Namespaces: 32
00:28:44.907 Compare Command: Supported
00:28:44.907 Write Uncorrectable Command: Not Supported
00:28:44.907 Dataset Management Command: Supported
00:28:44.907 Write Zeroes Command: Supported
00:28:44.907 Set Features Save Field: Not Supported
00:28:44.907 Reservations: Supported
00:28:44.907 Timestamp: Not Supported
00:28:44.907 Copy: Supported
00:28:44.907 Volatile Write Cache: Present
00:28:44.907 Atomic Write Unit (Normal): 1
00:28:44.907 Atomic Write Unit (PFail): 1
00:28:44.907 Atomic Compare & Write Unit: 1
00:28:44.907 Fused Compare & Write: Supported
00:28:44.907 Scatter-Gather List
00:28:44.907 SGL Command Set: Supported
00:28:44.907 SGL Keyed: Supported
00:28:44.907 SGL Bit Bucket Descriptor: Not Supported
00:28:44.907 SGL Metadata Pointer: Not Supported
00:28:44.907 Oversized SGL: Not Supported
00:28:44.907 SGL Metadata Address: Not Supported
00:28:44.907 SGL Offset: Supported
00:28:44.907 Transport SGL Data Block: Not Supported
00:28:44.907 Replay Protected Memory Block: Not Supported
00:28:44.907
00:28:44.907 Firmware Slot Information
00:28:44.907 =========================
00:28:44.907 Active slot: 1
00:28:44.907 Slot 1 Firmware Revision: 24.05.1
00:28:44.907
00:28:44.907
00:28:44.907 Commands Supported and Effects
00:28:44.907 ==============================
00:28:44.907 Admin Commands
00:28:44.907 --------------
00:28:44.907 Get Log Page (02h): Supported
00:28:44.907 Identify (06h): Supported
00:28:44.907 Abort (08h): Supported
00:28:44.907 Set Features (09h): Supported
00:28:44.907 Get Features (0Ah): Supported
00:28:44.907 Asynchronous Event Request (0Ch): Supported
00:28:44.907 Keep Alive (18h): Supported
00:28:44.907 I/O Commands
00:28:44.907 ------------
00:28:44.907 Flush (00h): Supported LBA-Change
00:28:44.907 Write (01h): Supported LBA-Change
00:28:44.907 Read (02h): Supported
00:28:44.907 Compare (05h): Supported
00:28:44.907 Write Zeroes (08h): Supported LBA-Change
00:28:44.907 Dataset Management (09h): Supported LBA-Change
00:28:44.907 Copy (19h): Supported LBA-Change
00:28:44.907 Unknown (79h): Supported LBA-Change
00:28:44.907 Unknown (7Ah): Supported
00:28:44.907
00:28:44.907 Error Log
00:28:44.907 =========
00:28:44.907
00:28:44.907 Arbitration
00:28:44.907 ===========
00:28:44.907 Arbitration Burst: 1
00:28:44.907
00:28:44.907 Power Management
00:28:44.907 ================
00:28:44.907 Number of Power States: 1
00:28:44.907 Current Power State: Power State #0
00:28:44.907 Power State #0:
00:28:44.907 Max Power: 0.00 W
00:28:44.907 Non-Operational State: Operational
00:28:44.907 Entry Latency: Not Reported
00:28:44.907 Exit Latency: Not Reported
00:28:44.907 Relative Read Throughput: 0
00:28:44.907 Relative Read Latency: 0
00:28:44.907 Relative Write Throughput: 0
00:28:44.907 Relative Write Latency: 0
00:28:44.907 Idle Power: Not Reported
00:28:44.907 Active Power: Not Reported
00:28:44.907 Non-Operational Permissive Mode: Not Supported
00:28:44.907
00:28:44.907 Health Information
00:28:44.907 ==================
00:28:44.907 Critical Warnings:
00:28:44.907 Available Spare Space: OK
00:28:44.907 Temperature: OK
00:28:44.907 Device Reliability: OK
00:28:44.907 Read Only: No
00:28:44.907 Volatile Memory Backup: OK
00:28:44.907 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:44.907 Temperature Threshold: [2024-07-21 03:38:30.060733] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.907 [2024-07-21 03:38:30.060751] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7f9980)
00:28:44.907 [2024-07-21 03:38:30.060769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.907 [2024-07-21 03:38:30.060802] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x861e60, cid 7, qid 0
00:28:44.907 [2024-07-21 03:38:30.060921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.907 [2024-07-21 03:38:30.060942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.907 [2024-07-21 03:38:30.060953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.060964] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x861e60) on tqpair=0x7f9980
00:28:44.908 [2024-07-21 03:38:30.061023] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:44.908 [2024-07-21 03:38:30.061056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.908 [2024-07-21 03:38:30.061076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.908 [2024-07-21 03:38:30.061091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.908 [2024-07-21 03:38:30.061117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.908 [2024-07-21 03:38:30.061136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.061148] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.061158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f9980)
00:28:44.908 [2024-07-21 03:38:30.061173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.908 [2024-07-21 03:38:30.061206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8618e0, cid 3, qid 0
00:28:44.908 [2024-07-21 03:38:30.061313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.908 [2024-07-21 03:38:30.061333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
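The records just above and below this point ("Prepare to destruct SSD", the ABORTED - SQ DELETION completions, the FABRIC PROPERTY SET/GET pair that writes CC and polls CSTS, and finally "shutdown complete") are the controller teardown spdk_nvme_identify performs after printing its report; they interleave with the report text because the *DEBUG* stream and the report share the console. Below is a hedged sketch of that teardown through SPDK's public async detach API; the helper name and busy-wait loop are this sketch's own, not the tool's actual code.

/* Hedged sketch of the detach path the surrounding records trace. */
#include <errno.h>
#include "spdk/nvme.h"

static void detach_sketch(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_detach_ctx *ctx = NULL;

    /* Starts nvme_ctrlr_destruct_async(): queued admin commands are
     * aborted (the SQ DELETION completions above) and CC is written to
     * begin the shutdown handshake. */
    if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
        return;
    }

    /* Polls CSTS until the controller reports shutdown complete, as in
     * the nvme_ctrlr_shutdown_poll_async record below. */
    while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
        /* busy-wait for brevity; real code would do useful work here */
    }
}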
00:28:44.908 [2024-07-21 03:38:30.061343] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.061354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8618e0) on tqpair=0x7f9980
00:28:44.908 [2024-07-21 03:38:30.061372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.061385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.061397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f9980)
00:28:44.908 [2024-07-21 03:38:30.061411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.908 [2024-07-21 03:38:30.061447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8618e0, cid 3, qid 0
00:28:44.908 [2024-07-21 03:38:30.061568] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.908 [2024-07-21 03:38:30.061591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.908 [2024-07-21 03:38:30.061605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.065633] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8618e0) on tqpair=0x7f9980
00:28:44.908 [2024-07-21 03:38:30.065662] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:28:44.908 [2024-07-21 03:38:30.065676] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:28:44.908 [2024-07-21 03:38:30.065705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.065720] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.065731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f9980)
00:28:44.908 [2024-07-21 03:38:30.065748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.908 [2024-07-21 03:38:30.065781] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8618e0, cid 3, qid 0
00:28:44.908 [2024-07-21 03:38:30.065897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:44.908 [2024-07-21 03:38:30.065916] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:44.908 [2024-07-21 03:38:30.065928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:44.908 [2024-07-21 03:38:30.065938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8618e0) on tqpair=0x7f9980
00:28:44.908 [2024-07-21 03:38:30.065959] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds
00:28:44.908 0 Kelvin (-273 Celsius)
00:28:44.908 Available Spare: 0%
00:28:44.908 Available Spare Threshold: 0%
00:28:44.908 Life Percentage Used: 0%
00:28:44.908 Data Units Read: 0
00:28:44.908 Data Units Written: 0
00:28:44.908 Host Read Commands: 0
00:28:44.908 Host Write Commands: 0
00:28:44.908 Controller Busy Time: 0 minutes
00:28:44.908 Power Cycles: 0
00:28:44.908 Power On Hours: 0 hours
00:28:44.908 Unsafe Shutdowns: 0
00:28:44.908 Unrecoverable Media Errors: 0
00:28:44.908 Lifetime Error Log Entries: 0
00:28:44.908 Warning Temperature Time: 0 minutes
00:28:44.908 Critical Temperature Time: 0 minutes
00:28:44.908
00:28:44.908 Number of Queues
00:28:44.908 ================
00:28:44.908 Number of I/O Submission Queues: 127
00:28:44.908 Number of I/O Completion Queues: 127
00:28:44.908
00:28:44.908 Active Namespaces
00:28:44.908 =================
00:28:44.908 Namespace ID:1
00:28:44.908 Error Recovery Timeout: Unlimited
00:28:44.908 Command Set Identifier: NVM (00h)
00:28:44.908 Deallocate: Supported
00:28:44.908 Deallocated/Unwritten Error: Not Supported
00:28:44.908 Deallocated Read Value: Unknown
00:28:44.908 Deallocate in Write Zeroes: Not Supported
00:28:44.908 Deallocated Guard Field: 0xFFFF
00:28:44.908 Flush: Supported
00:28:44.908 Reservation: Supported
00:28:44.908 Namespace Sharing Capabilities: Multiple Controllers
00:28:44.908 Size (in LBAs): 131072 (0GiB)
00:28:44.908 Capacity (in LBAs): 131072 (0GiB)
00:28:44.908 Utilization (in LBAs): 131072 (0GiB)
00:28:44.908 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:44.908 EUI64: ABCDEF0123456789
00:28:44.908 UUID: 97e67867-8953-43c7-9770-671a5e36939d
00:28:44.908 Thin Provisioning: Not Supported
00:28:44.908 Per-NS Atomic Units: Yes
00:28:44.908 Atomic Boundary Size (Normal): 0
00:28:44.908 Atomic Boundary Size (PFail): 0
00:28:44.908 Atomic Boundary Offset: 0
00:28:44.908 Maximum Single Source Range Length: 65535
00:28:44.908 Maximum Copy Length: 65535
00:28:44.908 Maximum Source Range Count: 1
00:28:44.908 NGUID/EUI64 Never Reused: No
00:28:44.908 Namespace Write Protected: No
00:28:44.908 Number of LBA Formats: 1
00:28:44.908 Current LBA Format: LBA Format #00
00:28:44.908 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:44.908
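The Active Namespaces block above is rendered from the IDENTIFY namespace data fetched earlier in the trace (the IDENTIFY commands with cdw10:00000000 and cdw10:00000003 against nsid:1). Below is a short hedged sketch of reading the same fields through SPDK's public API once a controller is connected; the printing and helper name are this sketch's own.

/* Hedged sketch: fetch the controller and namespace identify data that
 * back the report above. All calls are SPDK public API. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

static void print_ns_sketch(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    struct spdk_nvme_ns *ns;
    const struct spdk_nvme_ns_data *nsdata;

    /* cdata backs "Controller Capabilities/Features": sn, mn, fr, nn, ... */
    printf("Max Number of Namespaces: %u\n", cdata->nn);

    ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1); /* "Namespace ID:1" above */
    if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
        return;
    }

    nsdata = spdk_nvme_ns_get_data(ns); /* raw IDENTIFY NS structure */
    printf("Size (in LBAs): %" PRIu64 "\n", nsdata->nsze);
    printf("Capacity (in LBAs): %" PRIu64 "\n", nsdata->ncap);
    printf("LBA data size: %u bytes\n", spdk_nvme_ns_get_sector_size(ns));
}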
Time: 0 minutes 00:28:44.908 00:28:44.908 Number of Queues 00:28:44.908 ================ 00:28:44.908 Number of I/O Submission Queues: 127 00:28:44.908 Number of I/O Completion Queues: 127 00:28:44.908 00:28:44.908 Active Namespaces 00:28:44.908 ================= 00:28:44.908 Namespace ID:1 00:28:44.908 Error Recovery Timeout: Unlimited 00:28:44.908 Command Set Identifier: NVM (00h) 00:28:44.908 Deallocate: Supported 00:28:44.908 Deallocated/Unwritten Error: Not Supported 00:28:44.908 Deallocated Read Value: Unknown 00:28:44.908 Deallocate in Write Zeroes: Not Supported 00:28:44.908 Deallocated Guard Field: 0xFFFF 00:28:44.908 Flush: Supported 00:28:44.908 Reservation: Supported 00:28:44.908 Namespace Sharing Capabilities: Multiple Controllers 00:28:44.908 Size (in LBAs): 131072 (0GiB) 00:28:44.908 Capacity (in LBAs): 131072 (0GiB) 00:28:44.908 Utilization (in LBAs): 131072 (0GiB) 00:28:44.908 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:44.908 EUI64: ABCDEF0123456789 00:28:44.908 UUID: 97e67867-8953-43c7-9770-671a5e36939d 00:28:44.908 Thin Provisioning: Not Supported 00:28:44.908 Per-NS Atomic Units: Yes 00:28:44.908 Atomic Boundary Size (Normal): 0 00:28:44.908 Atomic Boundary Size (PFail): 0 00:28:44.908 Atomic Boundary Offset: 0 00:28:44.908 Maximum Single Source Range Length: 65535 00:28:44.908 Maximum Copy Length: 65535 00:28:44.908 Maximum Source Range Count: 1 00:28:44.908 NGUID/EUI64 Never Reused: No 00:28:44.908 Namespace Write Protected: No 00:28:44.908 Number of LBA Formats: 1 00:28:44.908 Current LBA Format: LBA Format #00 00:28:44.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:44.908 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.908 rmmod nvme_tcp 00:28:44.908 rmmod nvme_fabrics 00:28:44.908 rmmod nvme_keyring 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2505821 ']' 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2505821 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 2505821 ']' 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill 
-0 2505821 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2505821 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2505821' 00:28:44.908 killing process with pid 2505821 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 2505821 00:28:44.908 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 2505821 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.167 03:38:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.697 03:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.697 00:28:47.697 real 0m5.550s 00:28:47.697 user 0m4.880s 00:28:47.697 sys 0m1.886s 00:28:47.697 03:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:47.697 03:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.697 ************************************ 00:28:47.697 END TEST nvmf_identify 00:28:47.697 ************************************ 00:28:47.697 03:38:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:47.697 03:38:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:47.697 03:38:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:47.697 03:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.697 ************************************ 00:28:47.697 START TEST nvmf_perf 00:28:47.697 ************************************ 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:47.697 * Looking for test storage... 
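# [editor's note] The nvmf_identify test above tears down in two steps: the
# subsystem is removed from the running target over RPC, and the initiator-side
# kernel modules are unloaded (the rmmod lines in the log). A minimal sketch of
# that teardown, assuming the target is still up and rpc.py talks to the
# default /var/tmp/spdk.sock socket:
#   scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
#   modprobe -v -r nvme-tcp
#   modprobe -v -r nvme-fabrics
# The nvmf_perf test starting here rebuilds a TCP target from scratch and
# sweeps spdk_nvme_perf across queue depths and IO sizes.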
00:28:47.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.697 03:38:32 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.697 03:38:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:49.608 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:49.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:49.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:49.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:49.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:49.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:28:49.609 00:28:49.609 --- 10.0.0.2 ping statistics --- 00:28:49.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.609 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:28:49.609 00:28:49.609 --- 10.0.0.1 ping statistics --- 00:28:49.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.609 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2507896 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2507896 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 2507896 ']' 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:49.609 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:49.609 [2024-07-21 03:38:34.705284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:49.609 [2024-07-21 03:38:34.705355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.609 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.609 [2024-07-21 03:38:34.768392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:49.609 [2024-07-21 03:38:34.856184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.610 [2024-07-21 03:38:34.856247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
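# [editor's note] The topology built by nvmf_tcp_init above puts the two
# ice-driver ports of this NET_TYPE=phy rig on opposite ends of the link:
# cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2/24
# (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1/24
# (initiator side). The essential commands, as they appear in the trace:
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# The two pings confirm both directions before the target comes up, and every
# later nvmf_tgt invocation is wrapped in 'ip netns exec cvl_0_0_ns_spdk' so
# the target listens from inside the namespace.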
00:28:49.610 [2024-07-21 03:38:34.856271] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.610 [2024-07-21 03:38:34.856283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.610 [2024-07-21 03:38:34.856293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.610 [2024-07-21 03:38:34.856391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.610 [2024-07-21 03:38:34.856466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.610 [2024-07-21 03:38:34.856524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.610 [2024-07-21 03:38:34.856526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.869 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:49.869 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:49.869 03:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:49.869 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.869 03:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:49.869 03:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.869 03:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:49.869 03:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:53.142 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:53.142 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:53.142 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:53.142 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:53.399 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:53.399 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:53.399 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:53.399 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:53.399 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:53.656 [2024-07-21 03:38:38.864774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.656 03:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.913 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:53.913 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.170 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:54.170 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:54.428 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.685 [2024-07-21 03:38:39.844353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.685 03:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.942 03:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:54.942 03:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:54.942 03:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:54.942 03:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:56.342 Initializing NVMe Controllers 00:28:56.342 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:56.342 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:56.342 Initialization complete. Launching workers. 00:28:56.342 ======================================================== 00:28:56.342 Latency(us) 00:28:56.342 Device Information : IOPS MiB/s Average min max 00:28:56.342 PCIE (0000:88:00.0) NSID 1 from core 0: 84511.35 330.12 378.10 42.33 6240.14 00:28:56.342 ======================================================== 00:28:56.342 Total : 84511.35 330.12 378.10 42.33 6240.14 00:28:56.342 00:28:56.342 03:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.342 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.275 Initializing NVMe Controllers 00:28:57.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:57.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:57.275 Initialization complete. Launching workers. 
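# [editor's note] For the fabrics runs that follow, the spdk_nvme_perf knobs
# read: -q queue depth, -o IO size in bytes, -w workload pattern, -M read
# percentage of the mix (50 = half reads, half writes), -t run time in
# seconds, -r the transport ID to connect to. The first run, for example, is
# a queue-depth-1 4 KiB random read/write pass over the TCP target:
#   spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
#     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# The PCIe run above it is the same workload aimed at trtype:PCIe
# traddr:0000:88:00.0, giving a local baseline for the fabric numbers.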
00:28:57.275 ======================================================== 00:28:57.275 Latency(us) 00:28:57.275 Device Information : IOPS MiB/s Average min max 00:28:57.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10409.44 143.77 45753.96 00:28:57.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 22703.83 7946.37 47908.49 00:28:57.275 ======================================================== 00:28:57.275 Total : 145.00 0.57 14309.73 143.77 47908.49 00:28:57.275 00:28:57.275 03:38:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.275 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.644 Initializing NVMe Controllers 00:28:58.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:58.644 Initialization complete. Launching workers. 00:28:58.644 ======================================================== 00:28:58.644 Latency(us) 00:28:58.644 Device Information : IOPS MiB/s Average min max 00:28:58.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8574.25 33.49 3737.34 575.96 9663.59 00:28:58.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3722.94 14.54 8692.84 4658.10 47753.72 00:28:58.644 ======================================================== 00:28:58.644 Total : 12297.19 48.04 5237.61 575.96 47753.72 00:28:58.644 00:28:58.644 03:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:58.644 03:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:58.645 03:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.172 Initializing NVMe Controllers 00:29:01.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.173 Controller IO queue size 128, less than required. 00:29:01.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.173 Controller IO queue size 128, less than required. 00:29:01.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:01.173 Initialization complete. Launching workers. 
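# [editor's note] Of the runs above, the '-q 32 -o 4096 ... -HI' invocation
# additionally asks perf for NVMe/TCP header digest (-H) and data digest (-I),
# i.e. CRC32C protection on each PDU; the '-q 128 -o 262144 -O 16384' run now
# in flight stresses large IOs instead, so its average latencies (next table)
# sit orders of magnitude above the 4 KiB runs.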
00:29:01.173 ======================================================== 00:29:01.173 Latency(us) 00:29:01.173 Device Information : IOPS MiB/s Average min max 00:29:01.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1645.42 411.36 78970.17 50882.75 117701.93 00:29:01.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 565.12 141.28 232168.83 71590.85 368468.48 00:29:01.173 ======================================================== 00:29:01.173 Total : 2210.54 552.64 118134.78 50882.75 368468.48 00:29:01.173 00:29:01.173 03:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:01.429 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.429 No valid NVMe controllers or AIO or URING devices found 00:29:01.429 Initializing NVMe Controllers 00:29:01.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.429 Controller IO queue size 128, less than required. 00:29:01.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.429 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:01.429 Controller IO queue size 128, less than required. 00:29:01.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.429 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:01.429 WARNING: Some requested NVMe devices were skipped 00:29:01.429 03:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:01.429 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.953 Initializing NVMe Controllers 00:29:03.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.953 Controller IO queue size 128, less than required. 00:29:03.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.953 Controller IO queue size 128, less than required. 00:29:03.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:03.953 Initialization complete. Launching workers. 
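# [editor's note] The empty '-o 36964' run above is expected behaviour, not a
# failure: perf only keeps namespaces whose sector size divides the IO size,
# and 36964 / 512 = 72 remainder 100, so both 512-byte-sector namespaces are
# removed from the test and "No valid NVMe controllers or AIO or URING
# devices found" follows. Any multiple of 512 (e.g. -o 36864 = 72 * 512)
# would have kept them; the odd size is presumably chosen to exercise exactly
# this skip path. The '--transport-stat' run now starting prints per-queue
# poll and completion counters alongside the usual latency table.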
00:29:03.953 00:29:03.953 ==================== 00:29:03.953 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:03.953 TCP transport: 00:29:03.953 polls: 8971 00:29:03.953 idle_polls: 5086 00:29:03.953 sock_completions: 3885 00:29:03.953 nvme_completions: 6329 00:29:03.953 submitted_requests: 9426 00:29:03.953 queued_requests: 1 00:29:03.953 00:29:03.953 ==================== 00:29:03.953 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:03.953 TCP transport: 00:29:03.953 polls: 12239 00:29:03.953 idle_polls: 9207 00:29:03.953 sock_completions: 3032 00:29:03.953 nvme_completions: 5413 00:29:03.953 submitted_requests: 8166 00:29:03.953 queued_requests: 1 00:29:03.953 ======================================================== 00:29:03.953 Latency(us) 00:29:03.953 Device Information : IOPS MiB/s Average min max 00:29:03.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1581.02 395.26 82833.82 49212.85 142731.44 00:29:03.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1352.16 338.04 96296.87 41033.87 145629.33 00:29:03.953 ======================================================== 00:29:03.953 Total : 2933.19 733.30 89040.12 41033.87 145629.33 00:29:03.953 00:29:03.953 03:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:03.953 03:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.211 03:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:04.211 03:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:04.211 03:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b04c8623-ce9f-454a-9ed7-1270c5b6404c 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b04c8623-ce9f-454a-9ed7-1270c5b6404c 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=b04c8623-ce9f-454a-9ed7-1270c5b6404c 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:08.388 03:38:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:08.388 { 00:29:08.388 "uuid": "b04c8623-ce9f-454a-9ed7-1270c5b6404c", 00:29:08.388 "name": "lvs_0", 00:29:08.388 "base_bdev": "Nvme0n1", 00:29:08.388 "total_data_clusters": 238234, 00:29:08.388 "free_clusters": 238234, 00:29:08.388 "block_size": 512, 00:29:08.388 "cluster_size": 4194304 00:29:08.388 } 00:29:08.388 ]' 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b04c8623-ce9f-454a-9ed7-1270c5b6404c") .free_clusters' 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b04c8623-ce9f-454a-9ed7-1270c5b6404c") .cluster_size' 00:29:08.388 03:38:53 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:29:08.388 952936 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b04c8623-ce9f-454a-9ed7-1270c5b6404c lbd_0 20480 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7aa0935c-8a15-4748-ae5f-1af97a65e49b 00:29:08.388 03:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7aa0935c-8a15-4748-ae5f-1af97a65e49b lvs_n_0 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:09.320 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:09.577 { 00:29:09.577 "uuid": "b04c8623-ce9f-454a-9ed7-1270c5b6404c", 00:29:09.577 "name": "lvs_0", 00:29:09.577 "base_bdev": "Nvme0n1", 00:29:09.577 "total_data_clusters": 238234, 00:29:09.577 "free_clusters": 233114, 00:29:09.577 "block_size": 512, 00:29:09.577 "cluster_size": 4194304 00:29:09.577 }, 00:29:09.577 { 00:29:09.577 "uuid": "fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd", 00:29:09.577 "name": "lvs_n_0", 00:29:09.577 "base_bdev": "7aa0935c-8a15-4748-ae5f-1af97a65e49b", 00:29:09.577 "total_data_clusters": 5114, 00:29:09.577 "free_clusters": 5114, 00:29:09.577 "block_size": 512, 00:29:09.577 "cluster_size": 4194304 00:29:09.577 } 00:29:09.577 ]' 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd") .free_clusters' 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd") .cluster_size' 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:29:09.577 20456 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:09.577 03:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fcd11d49-e619-4d3c-aa2f-2d6ba3fb25bd lbd_nest_0 20456 00:29:09.834 03:38:55 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=241b00b6-d7cf-4270-b5f6-fcb010fb2bfd 00:29:09.835 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.091 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:10.091 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 241b00b6-d7cf-4270-b5f6-fcb010fb2bfd 00:29:10.351 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.608 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:10.608 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:10.608 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:10.608 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:10.608 03:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.608 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.789 Initializing NVMe Controllers 00:29:22.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.789 Initialization complete. Launching workers. 00:29:22.789 ======================================================== 00:29:22.789 Latency(us) 00:29:22.789 Device Information : IOPS MiB/s Average min max 00:29:22.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.90 0.02 20960.95 172.58 45782.16 00:29:22.789 ======================================================== 00:29:22.789 Total : 47.90 0.02 20960.95 172.58 45782.16 00:29:22.789 00:29:22.789 03:39:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:22.789 03:39:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:22.789 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.786 Initializing NVMe Controllers 00:29:32.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.786 Initialization complete. Launching workers. 
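# [editor's note] The bdev under test here is an lvol nested inside an lvol:
# lvs_0 on Nvme0n1 reports 238234 free 4 MiB clusters (238234 * 4 = 952936
# MiB), capped to the 20480 MiB lbd_0, on top of which lvs_n_0 offers 5114
# clusters (5114 * 4 = 20456 MiB) for lbd_nest_0. The qd_depth and io_size
# arrays above then drive a 3 x 2 sweep; the loop is equivalent to:
#   for qd in 1 32 128; do
#     for o in 512 131072; do
#       spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
#         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
#     done
#   done
# The six result tables that follow come from these iterations in order.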
00:29:32.786 ======================================================== 00:29:32.786 Latency(us) 00:29:32.786 Device Information : IOPS MiB/s Average min max 00:29:32.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.30 9.91 12628.02 5022.43 47902.91 00:29:32.786 ======================================================== 00:29:32.786 Total : 79.30 9.91 12628.02 5022.43 47902.91 00:29:32.786 00:29:32.786 03:39:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:32.786 03:39:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:32.786 03:39:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.786 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.752 Initializing NVMe Controllers 00:29:42.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.752 Initialization complete. Launching workers. 00:29:42.752 ======================================================== 00:29:42.752 Latency(us) 00:29:42.752 Device Information : IOPS MiB/s Average min max 00:29:42.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7647.90 3.73 4193.22 273.09 47883.80 00:29:42.752 ======================================================== 00:29:42.752 Total : 7647.90 3.73 4193.22 273.09 47883.80 00:29:42.752 00:29:42.752 03:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:42.752 03:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.752 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.709 Initializing NVMe Controllers 00:29:52.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.709 Initialization complete. Launching workers. 00:29:52.709 ======================================================== 00:29:52.709 Latency(us) 00:29:52.709 Device Information : IOPS MiB/s Average min max 00:29:52.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3735.19 466.90 8568.73 709.78 20356.79 00:29:52.709 ======================================================== 00:29:52.709 Total : 3735.19 466.90 8568.73 709.78 20356.79 00:29:52.709 00:29:52.709 03:39:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:52.709 03:39:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.709 03:39:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.709 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.673 Initializing NVMe Controllers 00:30:02.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.673 Controller IO queue size 128, less than required. 00:30:02.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
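# [editor's note] The "Controller IO queue size 128, less than required"
# notice on the -q 128 runs is advisory: an NVMe submission queue of size N
# can hold at most N-1 commands in flight (one slot stays empty to tell a
# full ring from an empty one), so a requested depth of 128 cannot be fully
# submitted on a single 128-entry queue and the overflow is queued inside the
# driver, exactly as the message says. Using -q 127, or spreading the load
# across more queue pairs, would avoid the buffering.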
00:30:02.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.673 Initialization complete. Launching workers. 00:30:02.673 ======================================================== 00:30:02.673 Latency(us) 00:30:02.673 Device Information : IOPS MiB/s Average min max 00:30:02.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11612.29 5.67 11025.46 1870.71 26378.25 00:30:02.673 ======================================================== 00:30:02.673 Total : 11612.29 5.67 11025.46 1870.71 26378.25 00:30:02.673 00:30:02.673 03:39:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:02.673 03:39:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.673 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.652 Initializing NVMe Controllers 00:30:12.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.652 Controller IO queue size 128, less than required. 00:30:12.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.652 Initialization complete. Launching workers. 00:30:12.652 ======================================================== 00:30:12.652 Latency(us) 00:30:12.652 Device Information : IOPS MiB/s Average min max 00:30:12.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.00 148.87 108110.15 20452.26 225065.54 00:30:12.652 ======================================================== 00:30:12.652 Total : 1191.00 148.87 108110.15 20452.26 225065.54 00:30:12.652 00:30:12.652 03:39:57 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.908 03:39:58 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 241b00b6-d7cf-4270-b5f6-fcb010fb2bfd 00:30:13.838 03:39:58 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:14.096 03:39:59 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7aa0935c-8a15-4748-ae5f-1af97a65e49b 00:30:14.353 03:39:59 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:14.611 rmmod nvme_tcp 00:30:14.611 rmmod nvme_fabrics 00:30:14.611 rmmod nvme_keyring 00:30:14.611 03:39:59 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2507896 ']' 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2507896 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 2507896 ']' 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 2507896 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2507896 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2507896' 00:30:14.611 killing process with pid 2507896 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 2507896 00:30:14.611 03:39:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 2507896 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.546 03:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.446 03:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:18.446 00:30:18.446 real 1m30.939s 00:30:18.446 user 5m33.497s 00:30:18.446 sys 0m16.926s 00:30:18.446 03:40:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:18.446 03:40:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.446 ************************************ 00:30:18.446 END TEST nvmf_perf 00:30:18.446 ************************************ 00:30:18.446 03:40:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:18.446 03:40:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:18.446 03:40:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:18.446 03:40:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.446 ************************************ 00:30:18.446 START TEST nvmf_fio_host 00:30:18.446 ************************************ 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:18.446 * Looking for test storage... 
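# [editor's note] Each TEST block in this log is wrapped by the harness's
# run_test helper, which, as the banners and bash time summaries show, times
# the script (real 1m30.939s for nvmf_perf above) and prints the START/END
# markers around its output. The fio host test beginning here is invoked the
# same way:
#   run_test nvmf_fio_host test/nvmf/host/fio.sh --transport=tcp
# (path shortened; the log uses the absolute workspace path).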
00:30:18.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.446 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:18.447 03:40:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:20.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
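The PCI scan above walks pci_devs matching the Intel E810 IDs (vendor 0x8086, device 0x159b) and maps each matching function to its kernel net device through sysfs. Roughly what that reduces to, as a standalone sketch (the real gather_supported_nvmf_pci_devs also covers x722 and Mellanox IDs plus the RDMA cases):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086 for Intel
        device=$(cat "$pci/device")    # e.g. 0x159b for an E810 port
        if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
            echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
        fi
    done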
00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:20.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:20.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:20.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
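With both physical ports discovered (cvl_0_0 and cvl_0_1) and is_hw=yes, nvmf_tcp_init below splits them across network namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk and takes 10.0.0.2, while the initiator keeps cvl_0_1 with 10.0.0.1 in the host namespace, so NVMe/TCP traffic actually crosses the two NICs. Condensed from the trace that follows:

    ip netns add cvl_0_0_ns_spdk                     # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                               # initiator -> target sanity check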
00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.344 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:20.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:30:20.344 00:30:20.344 --- 10.0.0.2 ping statistics --- 00:30:20.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.344 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:20.345 00:30:20.345 --- 10.0.0.1 ping statistics --- 00:30:20.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.345 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2520474 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2520474 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 2520474 ']' 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:20.345 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.345 [2024-07-21 03:40:05.629645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:20.345 [2024-07-21 03:40:05.629732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.601 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.601 [2024-07-21 03:40:05.702512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.601 [2024-07-21 03:40:05.795309] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:20.601 [2024-07-21 03:40:05.795368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.601 [2024-07-21 03:40:05.795402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.601 [2024-07-21 03:40:05.795417] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.601 [2024-07-21 03:40:05.795429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.601 [2024-07-21 03:40:05.795511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.601 [2024-07-21 03:40:05.795572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.601 [2024-07-21 03:40:05.795624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.601 [2024-07-21 03:40:05.795628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.857 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:20.857 03:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:20.857 03:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:20.857 [2024-07-21 03:40:06.143940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.857 03:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:21.113 03:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.113 03:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.113 03:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:21.369 Malloc1 00:30:21.369 03:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.626 03:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:21.626 03:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.883 [2024-07-21 03:40:07.163562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.883 03:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:22.140 03:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:22.396 fio-3.35 00:30:22.396 Starting 1 thread 00:30:22.396 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.919 00:30:24.919 test: (groupid=0, jobs=1): err= 0: pid=2520830: Sun Jul 21 03:40:09 2024 00:30:24.919 read: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:30:24.919 slat (nsec): min=1912, max=105073, avg=2396.54, stdev=1416.73 00:30:24.919 clat (usec): min=2088, max=12619, avg=7662.57, stdev=615.35 00:30:24.919 lat (usec): min=2111, max=12622, avg=7664.97, stdev=615.28 00:30:24.919 clat percentiles (usec): 00:30:24.919 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:30:24.919 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:30:24.919 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:24.919 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11863], 99.95th=[12387], 00:30:24.919 | 99.99th=[12649] 00:30:24.919 bw ( KiB/s): 
min=35696, max=37336, per=99.92%, avg=36652.00, stdev=687.77, samples=4 00:30:24.919 iops : min= 8924, max= 9334, avg=9163.00, stdev=171.94, samples=4 00:30:24.919 write: IOPS=9180, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec); 0 zone resets 00:30:24.919 slat (nsec): min=2007, max=97515, avg=2509.96, stdev=1109.43 00:30:24.919 clat (usec): min=1550, max=11898, avg=6231.42, stdev=509.62 00:30:24.919 lat (usec): min=1557, max=11900, avg=6233.93, stdev=509.59 00:30:24.919 clat percentiles (usec): 00:30:24.919 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:30:24.920 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:30:24.920 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:30:24.920 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[10814], 00:30:24.920 | 99.99th=[11863] 00:30:24.920 bw ( KiB/s): min=36480, max=36848, per=99.99%, avg=36720.00, stdev=166.79, samples=4 00:30:24.920 iops : min= 9120, max= 9212, avg=9180.00, stdev=41.70, samples=4 00:30:24.920 lat (msec) : 2=0.02%, 4=0.10%, 10=99.73%, 20=0.14% 00:30:24.920 cpu : usr=63.84%, sys=33.72%, ctx=75, majf=0, minf=6 00:30:24.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.920 issued rwts: total=18396,18417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.920 00:30:24.920 Run status group 0 (all jobs): 00:30:24.920 READ: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.3MB), run=2006-2006msec 00:30:24.920 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:24.920 03:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.920 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:24.920 fio-3.35 00:30:24.920 Starting 1 thread 00:30:24.920 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.445 00:30:27.445 test: (groupid=0, jobs=1): err= 0: pid=2521212: Sun Jul 21 03:40:12 2024 00:30:27.445 read: IOPS=8204, BW=128MiB/s (134MB/s)(258MiB/2009msec) 00:30:27.445 slat (nsec): min=2915, max=93318, avg=3661.68, stdev=1737.34 00:30:27.445 clat (usec): min=2624, max=52671, avg=9005.37, stdev=3957.46 00:30:27.445 lat (usec): min=2628, max=52674, avg=9009.03, stdev=3957.46 00:30:27.445 clat percentiles (usec): 00:30:27.445 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6980], 00:30:27.445 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:30:27.445 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[12125], 00:30:27.445 | 99.00th=[14353], 99.50th=[46400], 99.90th=[51643], 99.95th=[52167], 00:30:27.445 | 99.99th=[52691] 00:30:27.445 bw ( KiB/s): min=55680, max=80096, per=52.46%, avg=68872.00, stdev=10550.57, samples=4 00:30:27.445 iops : min= 3480, max= 5006, avg=4304.50, stdev=659.41, samples=4 00:30:27.445 write: IOPS=4960, BW=77.5MiB/s (81.3MB/s)(141MiB/1815msec); 0 zone resets 00:30:27.445 slat (usec): min=30, max=162, avg=34.42, stdev= 5.96 00:30:27.445 clat (usec): min=3460, max=19969, avg=11219.92, stdev=1989.07 00:30:27.445 lat (usec): min=3514, max=20002, avg=11254.34, stdev=1989.08 00:30:27.445 clat percentiles (usec): 00:30:27.445 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:30:27.445 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:30:27.445 | 70.00th=[12125], 80.00th=[12780], 90.00th=[14091], 95.00th=[14877], 00:30:27.445 | 99.00th=[16319], 99.50th=[16712], 99.90th=[19006], 99.95th=[19530], 00:30:27.445 | 99.99th=[20055] 00:30:27.445 bw ( KiB/s): min=57600, max=83680, per=89.92%, avg=71368.00, stdev=11498.06, samples=4 00:30:27.445 iops : min= 3600, max= 5230, avg=4460.50, stdev=718.63, samples=4 00:30:27.445 lat (msec) : 4=0.16%, 10=58.72%, 20=40.62%, 50=0.35%, 100=0.15% 00:30:27.445 cpu : usr=76.26%, sys=22.40%, 
ctx=35, majf=0, minf=2 00:30:27.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:27.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:27.445 issued rwts: total=16483,9003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:27.445 00:30:27.445 Run status group 0 (all jobs): 00:30:27.445 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=258MiB (270MB), run=2009-2009msec 00:30:27.445 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=141MiB (148MB), run=1815-1815msec 00:30:27.445 03:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:27.708 03:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:30.983 Nvme0n1 00:30:30.983 03:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b6761c11-e78c-496d-8602-52d43cfa9d5e 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b6761c11-e78c-496d-8602-52d43cfa9d5e 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=b6761c11-e78c-496d-8602-52d43cfa9d5e 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:34.260 03:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:34.260 { 00:30:34.260 "uuid": "b6761c11-e78c-496d-8602-52d43cfa9d5e", 00:30:34.260 "name": "lvs_0", 00:30:34.260 "base_bdev": "Nvme0n1", 00:30:34.260 "total_data_clusters": 930, 00:30:34.260 "free_clusters": 930, 
00:30:34.260 "block_size": 512, 00:30:34.260 "cluster_size": 1073741824 00:30:34.260 } 00:30:34.260 ]' 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b6761c11-e78c-496d-8602-52d43cfa9d5e") .free_clusters' 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b6761c11-e78c-496d-8602-52d43cfa9d5e") .cluster_size' 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:34.260 952320 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:34.260 798bdced-d758-4ed1-8b9a-519a46ccc7c6 00:30:34.260 03:40:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:34.518 03:40:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:34.776 03:40:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:35.033 03:40:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.033 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.033 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:35.033 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:35.033 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:35.034 03:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:35.291 fio-3.35 00:30:35.291 Starting 1 thread 00:30:35.291 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.884 00:30:37.884 test: (groupid=0, jobs=1): err= 0: pid=2522565: Sun Jul 21 03:40:22 2024 00:30:37.884 read: IOPS=5921, BW=23.1MiB/s (24.3MB/s)(46.5MiB/2009msec) 00:30:37.884 slat (nsec): min=1941, max=131153, avg=2675.27, stdev=2165.88 00:30:37.884 clat (usec): min=1084, max=171103, avg=11801.09, stdev=11705.02 00:30:37.884 lat (usec): min=1087, max=171137, avg=11803.77, stdev=11705.24 00:30:37.884 clat percentiles (msec): 00:30:37.884 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:37.884 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:37.884 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:37.884 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:37.884 | 99.99th=[ 171] 00:30:37.884 bw ( KiB/s): min=16832, max=26840, per=99.88%, avg=23656.00, stdev=4627.58, samples=4 00:30:37.884 iops : min= 4208, max= 6710, avg=5914.00, stdev=1156.90, samples=4 00:30:37.884 write: IOPS=5917, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec); 0 zone resets 00:30:37.884 slat (nsec): min=2043, max=98138, avg=2760.93, stdev=1895.00 00:30:37.884 clat (usec): min=353, max=169231, avg=9659.19, stdev=10979.55 00:30:37.884 lat (usec): min=356, max=169237, avg=9661.95, stdev=10979.76 00:30:37.884 clat percentiles (msec): 00:30:37.884 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:37.884 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:30:37.884 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:30:37.884 | 99.00th=[ 12], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:37.884 | 99.99th=[ 169] 00:30:37.884 bw ( KiB/s): min=17832, max=26112, per=99.95%, avg=23658.00, stdev=3923.17, samples=4 00:30:37.884 iops : min= 4458, max= 6528, avg=5914.50, stdev=980.79, samples=4 00:30:37.884 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:37.884 lat (msec) : 2=0.02%, 4=0.13%, 10=53.89%, 20=45.40%, 250=0.54% 00:30:37.884 cpu : usr=61.25%, sys=36.90%, ctx=93, majf=0, minf=24 00:30:37.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:37.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:37.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:37.884 issued rwts: total=11896,11888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:37.884 00:30:37.884 Run status group 0 (all jobs): 00:30:37.884 READ: bw=23.1MiB/s (24.3MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.7MB), run=2009-2009msec 00:30:37.884 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.7MB), run=2009-2009msec 00:30:37.884 03:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:37.884 03:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1840d16d-809e-4923-b613-29b5a42cba69 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1840d16d-809e-4923-b613-29b5a42cba69 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=1840d16d-809e-4923-b613-29b5a42cba69 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:39.251 { 00:30:39.251 "uuid": "b6761c11-e78c-496d-8602-52d43cfa9d5e", 00:30:39.251 "name": "lvs_0", 00:30:39.251 "base_bdev": "Nvme0n1", 00:30:39.251 "total_data_clusters": 930, 00:30:39.251 "free_clusters": 0, 00:30:39.251 "block_size": 512, 00:30:39.251 "cluster_size": 1073741824 00:30:39.251 }, 00:30:39.251 { 00:30:39.251 "uuid": "1840d16d-809e-4923-b613-29b5a42cba69", 00:30:39.251 "name": "lvs_n_0", 00:30:39.251 "base_bdev": "798bdced-d758-4ed1-8b9a-519a46ccc7c6", 00:30:39.251 "total_data_clusters": 237847, 00:30:39.251 "free_clusters": 237847, 00:30:39.251 "block_size": 512, 00:30:39.251 "cluster_size": 4194304 00:30:39.251 } 00:30:39.251 ]' 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="1840d16d-809e-4923-b613-29b5a42cba69") .free_clusters' 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:39.251 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="1840d16d-809e-4923-b613-29b5a42cba69") .cluster_size' 00:30:39.508 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:39.508 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:39.508 03:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:39.508 951388 00:30:39.508 03:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:40.072 5b9f11a3-7178-497a-8db3-82d13543ce41 00:30:40.072 03:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:40.329 03:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:40.587 03:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:40.845 03:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.102 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:41.102 fio-3.35 00:30:41.102 Starting 1 thread 00:30:41.102 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.629 00:30:43.629 test: (groupid=0, jobs=1): err= 0: pid=2523296: Sun Jul 21 03:40:28 2024 00:30:43.629 read: IOPS=5854, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec) 00:30:43.629 slat (nsec): min=1927, max=137285, avg=2480.06, stdev=2035.64 00:30:43.629 clat (usec): min=4384, max=20866, avg=11966.95, stdev=1116.63 00:30:43.629 lat (usec): min=4404, max=20868, avg=11969.43, stdev=1116.49 00:30:43.629 clat percentiles (usec): 00:30:43.629 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:30:43.629 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:43.629 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:30:43.629 | 99.00th=[14353], 99.50th=[14615], 99.90th=[18220], 99.95th=[19792], 00:30:43.629 | 99.99th=[20841] 00:30:43.629 bw ( KiB/s): min=22104, max=23912, per=99.93%, avg=23402.00, stdev=869.77, samples=4 00:30:43.629 iops : min= 5526, max= 5978, avg=5850.50, stdev=217.44, samples=4 00:30:43.629 write: IOPS=5848, BW=22.8MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:30:43.629 slat (nsec): min=1996, max=99701, avg=2571.42, stdev=1522.93 00:30:43.629 clat (usec): min=2095, max=18454, avg=9758.89, stdev=920.37 00:30:43.629 lat (usec): min=2102, max=18457, avg=9761.46, stdev=920.31 00:30:43.629 clat percentiles (usec): 00:30:43.629 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:43.629 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:30:43.629 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:43.629 | 99.00th=[11731], 99.50th=[12125], 99.90th=[15926], 99.95th=[17433], 00:30:43.629 | 99.99th=[18482] 00:30:43.629 bw ( KiB/s): min=23128, max=23488, per=99.89%, avg=23366.00, stdev=161.51, samples=4 00:30:43.629 iops : min= 5782, max= 5872, avg=5841.50, stdev=40.38, samples=4 00:30:43.629 lat (msec) : 4=0.05%, 10=32.45%, 20=67.49%, 50=0.01% 00:30:43.629 cpu : usr=61.40%, sys=36.60%, ctx=125, majf=0, minf=24 00:30:43.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:43.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:43.629 issued rwts: total=11762,11749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:43.629 00:30:43.629 Run status group 0 (all jobs): 00:30:43.629 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.2MB), run=2009-2009msec 00:30:43.629 WRITE: bw=22.8MiB/s (24.0MB/s), 22.8MiB/s-22.8MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:30:43.629 03:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:43.629 03:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:43.629 03:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:47.809 03:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -l lvs_n_0 00:30:47.809 03:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:51.077 03:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:51.077 03:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:52.972 rmmod nvme_tcp 00:30:52.972 rmmod nvme_fabrics 00:30:52.972 rmmod nvme_keyring 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2520474 ']' 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2520474 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 2520474 ']' 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 2520474 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2520474 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2520474' 00:30:52.972 killing process with pid 2520474 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 2520474 00:30:52.972 03:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 2520474 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.972 03:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.972 03:40:38 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.973 03:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.495 03:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.495 00:30:55.495 real 0m36.794s 00:30:55.495 user 2m21.328s 00:30:55.495 sys 0m6.780s 00:30:55.495 03:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:55.495 03:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.495 ************************************ 00:30:55.495 END TEST nvmf_fio_host 00:30:55.495 ************************************ 00:30:55.495 03:40:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:55.495 03:40:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:55.495 03:40:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:55.495 03:40:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.495 ************************************ 00:30:55.495 START TEST nvmf_failover 00:30:55.495 ************************************ 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:55.495 * Looking for test storage... 00:30:55.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.495 03:40:40 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:55.495 03:40:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.393 03:40:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:57.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:57.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:57.393 03:40:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:57.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:57.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.393 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.394 
03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:57.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:30:57.394 00:30:57.394 --- 10.0.0.2 ping statistics --- 00:30:57.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.394 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:30:57.394 00:30:57.394 --- 10.0.0.1 ping statistics --- 00:30:57.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.394 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2526546 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2526546 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2526546 ']' 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:57.394 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 [2024-07-21 03:40:42.582379] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:57.394 [2024-07-21 03:40:42.582470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.394 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.394 [2024-07-21 03:40:42.646034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.652 [2024-07-21 03:40:42.730892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.652 [2024-07-21 03:40:42.730943] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.652 [2024-07-21 03:40:42.730967] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.652 [2024-07-21 03:40:42.730978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.652 [2024-07-21 03:40:42.730988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.652 [2024-07-21 03:40:42.731061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.652 [2024-07-21 03:40:42.731118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.652 [2024-07-21 03:40:42.731121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.652 03:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:57.908 [2024-07-21 03:40:43.092746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.908 03:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:58.166 Malloc0 00:30:58.166 03:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.422 03:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.679 03:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.936 [2024-07-21 03:40:44.085362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.936 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:59.193 [2024-07-21 03:40:44.338120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:59.193 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:59.450 [2024-07-21 03:40:44.591059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2526828 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2526828 /var/tmp/bdevperf.sock 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2526828 ']' 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:59.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
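The target-side setup traced above condenses to the rpc.py sequence below — a sketch, assuming $SPDK_DIR points at the SPDK checkout used in this run and that rpc.py reaches the target over the default /var/tmp/spdk.sock; it summarizes the traced commands and is not a verbatim excerpt of host/failover.sh:

  # One subsystem backed by a 64 MiB / 512 B-block Malloc bdev, reachable on three TCP ports.
  RPC=$SPDK_DIR/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # Initiator side: bdevperf starts idle (-z) on its own RPC socket, with the workload
  # flags copied from the invocation in the trace (queue depth 128, 4 KiB verify, 15 s).
  $SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &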
00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:59.450 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:59.707 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:59.707 03:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:59.707 03:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.307 NVMe0n1 00:31:00.308 03:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.565 00:31:00.565 03:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2526964 00:31:00.565 03:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:00.565 03:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:01.509 03:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.766 [2024-07-21 03:40:46.912242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 [2024-07-21 03:40:46.912440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84d50 is same with the state(5) to be set 00:31:01.766 03:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:05.037 03:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.037 00:31:05.294 03:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:05.294 [2024-07-21 03:40:50.585569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.294 [2024-07-21 03:40:50.585641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.294 [2024-07-21 03:40:50.585665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.294 [2024-07-21 03:40:50.585679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.294 [2024-07-21 03:40:50.585691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.294 [2024-07-21 03:40:50.585703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585964] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.585997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.586009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 [2024-07-21 03:40:50.586021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85bd0 is same with the state(5) to be set 00:31:05.295 03:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:08.572 03:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.572 [2024-07-21 03:40:53.877098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.828 03:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:09.759 03:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:10.015 03:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2526964 00:31:16.574 0 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2526828 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2526828 ']' 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2526828 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2526828 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2526828' 00:31:16.574 killing process with pid 2526828 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2526828 00:31:16.574 03:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2526828 00:31:16.574 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:16.574 [2024-07-21 03:40:44.656407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
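The failover itself is driven by the listener shuffle traced above. As a sketch under the same assumptions (default RPC sockets, $SPDK_DIR as the checkout root, $RPC as defined earlier), the sequence bdevperf has to survive looks like this, each removal forcing NVMe0 onto the next surviving path:

  # Two paths up front, then rotate the live listener while the verify job runs.
  BPERF="-s /var/tmp/bdevperf.sock"
  $RPC $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  $RPC $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # back to 4420
  wait $!   # the lone "0" in the trace is this job's result: the 15 s verify run rode out every flip

The repeated tcp.c recv-state errors above are the visible cost of each flip: every removed listener tears down its qpairs, and the host reconnects through whichever path is still listening.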
00:31:16.574 [2024-07-21 03:40:44.656512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526828 ] 00:31:16.574 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.574 [2024-07-21 03:40:44.720334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.574 [2024-07-21 03:40:44.806769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.574 Running I/O for 15 seconds... 00:31:16.574 [2024-07-21 03:40:46.912753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.912796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.912843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.912875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.912917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.912978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.912995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.574 [2024-07-21 03:40:46.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 
03:40:46.913411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.574 [2024-07-21 03:40:46.913871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.574 [2024-07-21 03:40:46.913885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.575 [2024-07-21 03:40:46.913912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.575 [2024-07-21 03:40:46.913942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.575 [2024-07-21 03:40:46.913957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.575 [2024-07-21 03:40:46.913971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.575 [2024-07-21 03:40:46.913990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.575 [2024-07-21 03:40:46.914005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.575 [2024-07-21 03:40:46.914020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.575 [2024-07-21 03:40:46.914033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.575 [2024-07-21 03:40:46.914048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.575 [2024-07-21 03:40:46.914062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed, 00:31:16.575 / 2024-07-21 03:40:46.914077-46.916883: 18 WRITE commands (sqid:1, nsid:1, lba:80176-80312, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 70 READ commands (sqid:1, nsid:1, lba:79440-79992, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:31:16.576 [2024-07-21 03:40:46.916898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3b50 is same with the state(5) to be set
00:31:16.576 [2024-07-21 03:40:46.916940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:16.576 [2024-07-21 03:40:46.916953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:16.576 [2024-07-21 03:40:46.916965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0
00:31:16.576 [2024-07-21 03:40:46.916988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.576 [2024-07-21 03:40:46.917045] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a3b50 was disconnected and freed. reset controller.
00:31:16.576 [2024-07-21 03:40:46.917068] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[log condensed, 00:31:16.576 / 2024-07-21 03:40:46.917116-46.917223: 4 ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0-3, nsid:0, cdw10:00000000 cdw11:00000000) printed by nvme_qpair.c: 223:nvme_admin_qpair_print_command, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:31:16.576 [2024-07-21 03:40:46.917237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.576 [2024-07-21 03:40:46.920609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.576 [2024-07-21 03:40:46.920671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084eb0 (9): Bad file descriptor
00:31:16.576 [2024-07-21 03:40:46.956769] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[log condensed, 00:31:16.576-16.577 / 2024-07-21 03:40:50.587763-50.589822: 63 WRITE commands (sqid:1, nsid:1, lba:78856-79352, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 2 READ commands (sqid:1, nsid:1, lba:78712-78720, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
[log condensed, 00:31:16.577 / 2024-07-21 03:40:50.589853-50.591142: queued i/o aborted by nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs and completed manually via nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 19 WRITE commands (sqid:1, cid:0, nsid:1, lba:79360-79504, len:8, PRP1 0x0 PRP2 0x0) and 3 READ commands (sqid:1, cid:0, nsid:1, lba:78728-78744, len:8, PRP1 0x0 PRP2 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:31:16.577 [2024-07-21
03:40:50.591155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.577 [2024-07-21 03:40:50.591166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.577 [2024-07-21 03:40:50.591177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78752 len:8 PRP1 0x0 PRP2 0x0 00:31:16.577 [2024-07-21 03:40:50.591200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.577 [2024-07-21 03:40:50.591216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.577 [2024-07-21 03:40:50.591227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78760 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78768 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78776 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78784 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591461] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:31:16.578 [2024-07-21 03:40:50.591801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.591955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.591966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.591977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.591990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592116] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 
03:40:50.592774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.592950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.592972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.592987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.592998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79704 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.578 [2024-07-21 03:40:50.593438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.578 [2024-07-21 03:40:50.593451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.578 [2024-07-21 03:40:50.593462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:31:16.578 [2024-07-21 03:40:50.593476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 
00:31:16.579 [2024-07-21 03:40:50.593768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.593955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.593966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.593979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.593993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.594376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.594388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.594401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.594412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:16.579 [2024-07-21 03:40:50.602738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.602948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.602969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.602981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.602994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603057] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.579 [2024-07-21 03:40:50.603474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.579 [2024-07-21 03:40:50.603486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.579 [2024-07-21 03:40:50.603498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:31:16.579 [2024-07-21 03:40:50.603512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 
03:40:50.603708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.603937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.603965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.603977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.603988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604041] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78712 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:31:16.580 [2024-07-21 03:40:50.604400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78720 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 
03:40:50.604734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.580 [2024-07-21 03:40:50.604876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.580 [2024-07-21 03:40:50.604888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:31:16.580 [2024-07-21 03:40:50.604900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:50.604962] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a5b50 was disconnected and freed. reset controller. 
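Each NOTICE pair in the run above is one queued I/O being completed manually with ABORTED - SQ DELETION status while the submission queue is torn down for the reset. A hedged one-liner, not part of the test itself, to tally those aborts from a saved copy of the console output (assuming it was captured as try.txt, the file the script cats at host/failover.sh@94):

    grep -c 'Command completed manually' try.txt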
00:31:16.580 [2024-07-21 03:40:50.604980] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:16.580 [2024-07-21 03:40:50.605029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.580 [2024-07-21 03:40:50.605048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.580 [2024-07-21 03:40:50.605064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.580 [2024-07-21 03:40:50.605079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.580 [2024-07-21 03:40:50.605093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.580 [2024-07-21 03:40:50.605111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.580 [2024-07-21 03:40:50.605126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.580 [2024-07-21 03:40:50.605140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.580 [2024-07-21 03:40:50.605154] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.580 [2024-07-21 03:40:50.605194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084eb0 (9): Bad file descriptor
00:31:16.580 [2024-07-21 03:40:50.608544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.580 [2024-07-21 03:40:50.679895] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
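The block above is one complete failover cycle: the I/O qpair is disconnected and freed, bdev_nvme retargets the trid from 10.0.0.2:4421 to 10.0.0.2:4422, the queued admin commands are aborted, and the controller reset completes successfully. A minimal sketch for spot-checking that the bdev survived the cycle, reusing the same RPC the script itself runs at host/failover.sh@82 (paths assume the SPDK repo root and the bdevperf RPC socket traced below):

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && echo 'NVMe0 controller still registered'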
00:31:16.580 [2024-07-21 03:40:55.138539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.138984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.138998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139287] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.580 [2024-07-21 03:40:55.139330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.580 [2024-07-21 03:40:55.139362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.580 [2024-07-21 03:40:55.139377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.580 [2024-07-21 03:40:55.139391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22920 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.581 [2024-07-21 03:40:55.139819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:16.581 [2024-07-21 03:40:55.139906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.139978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.139992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 
03:40:55.140197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.140984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.140999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.581 [2024-07-21 03:40:55.141372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.581 [2024-07-21 03:40:55.141388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 
[2024-07-21 03:40:55.141447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.141975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.141988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.582 [2024-07-21 03:40:55.142082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.582 [2024-07-21 03:40:55.142112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.582 [2024-07-21 03:40:55.142551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.582 [2024-07-21 03:40:55.142622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.582 [2024-07-21 03:40:55.142638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23672 len:8 PRP1 0x0 PRP2 0x0 00:31:16.582 [2024-07-21 03:40:55.142652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.582 [2024-07-21 03:40:55.142712] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x124f070 was disconnected and freed. reset controller. 
00:31:16.582 [2024-07-21 03:40:55.142730] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:16.582 [2024-07-21 03:40:55.142762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.582 [2024-07-21 03:40:55.142781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.582 [2024-07-21 03:40:55.142797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.582 [2024-07-21 03:40:55.142811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.582 [2024-07-21 03:40:55.142826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.582 [2024-07-21 03:40:55.142839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.582 [2024-07-21 03:40:55.142853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.582 [2024-07-21 03:40:55.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.582 [2024-07-21 03:40:55.142880] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.582 [2024-07-21 03:40:55.146202] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.582 [2024-07-21 03:40:55.146240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084eb0 (9): Bad file descriptor
00:31:16.582 [2024-07-21 03:40:55.215157] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
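This second cycle has the same shape, now failing over from 10.0.0.2:4422 back to 10.0.0.2:4420. In the second phase of the test, traced after the results below, the path failure is injected explicitly rather than by restarting the target; a condensed restatement of that trigger, assuming the same socket, address and NQN as host/failover.sh@84:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1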
00:31:16.582 00:31:16.582 Latency(us) 00:31:16.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.582 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:16.582 Verification LBA range: start 0x0 length 0x4000 00:31:16.582 NVMe0n1 : 15.01 8591.01 33.56 444.24 0.00 14138.21 567.37 23787.14 00:31:16.582 =================================================================================================================== 00:31:16.582 Total : 8591.01 33.56 444.24 0.00 14138.21 567.37 23787.14 00:31:16.582 Received shutdown signal, test time was about 15.000000 seconds 00:31:16.582 00:31:16.582 Latency(us) 00:31:16.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.582 =================================================================================================================== 00:31:16.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2528685 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2528685 /var/tmp/bdevperf.sock 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2528685 ']' 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
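The pass criterion is right in the trace: host/failover.sh@65 counts 'Resetting controller successful' lines in the capture, and @67 aborts unless the count is exactly 3, which in this run corresponds to one successful reset per path switch. A minimal sketch of that check, with the capture file name taken from the trace:

    # Sketch of the verification step traced above.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }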
00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:16.582 [2024-07-21 03:41:01.588694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:16.582 [2024-07-21 03:41:01.833323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:16.582 03:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:16.839 NVMe0n1 00:31:17.095 03:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.351 00:31:17.351 03:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.607 00:31:17.607 03:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:17.607 03:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:17.863 03:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:18.119 03:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:21.394 03:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:21.394 03:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:21.394 03:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2529344 00:31:21.394 03:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:21.394 03:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2529344 00:31:22.766 0 00:31:22.766 03:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:22.766 [2024-07-21 03:41:01.116970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:31:22.766 [2024-07-21 03:41:01.117071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528685 ] 00:31:22.766 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.766 [2024-07-21 03:41:01.178002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.766 [2024-07-21 03:41:01.262823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.766 [2024-07-21 03:41:03.257272] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:22.766 [2024-07-21 03:41:03.257363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.766 [2024-07-21 03:41:03.257386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.766 [2024-07-21 03:41:03.257403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.766 [2024-07-21 03:41:03.257417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.766 [2024-07-21 03:41:03.257431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.766 [2024-07-21 03:41:03.257445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.766 [2024-07-21 03:41:03.257459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.766 [2024-07-21 03:41:03.257473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.766 [2024-07-21 03:41:03.257487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:22.766 [2024-07-21 03:41:03.257527] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:22.767 [2024-07-21 03:41:03.257559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8eb0 (9): Bad file descriptor 00:31:22.767 [2024-07-21 03:41:03.349753] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:22.767 Running I/O for 1 seconds... 
00:31:22.767 00:31:22.767 Latency(us) 00:31:22.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.767 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.767 Verification LBA range: start 0x0 length 0x4000 00:31:22.767 NVMe0n1 : 1.01 8800.51 34.38 0.00 0.00 14482.25 3106.89 12330.48 00:31:22.767 =================================================================================================================== 00:31:22.767 Total : 8800.51 34.38 0.00 0.00 14482.25 3106.89 12330.48 00:31:22.767 03:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:22.767 03:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:22.767 03:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:23.023 03:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:23.023 03:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:23.280 03:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:23.591 03:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2528685 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2528685 ']' 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2528685 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2528685 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2528685' 00:31:26.893 killing process with pid 2528685 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2528685 00:31:26.893 03:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2528685 00:31:26.893 03:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:26.893 03:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:27.150 
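The second bdevperf pass drives failover from the host side: three listeners on the same subsystem are attached under one bdev name, then the active path is detached while verify I/O runs, and a short sleep lets bdev_nvme reset onto a surviving trid, which is exactly the 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' line in the try.txt excerpt above. A condensed sketch of that sequence, assuming the rpc.py path and RPC socket shown in the trace:

    # Condensed sketch; $RPC, ports, and NQN are taken from the trace above.
    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
    for port in 4420 4421 4422; do          # three paths, one bdev name
        $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path
    sleep 3                                      # give the reset time to land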
03:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:27.150 rmmod nvme_tcp 00:31:27.150 rmmod nvme_fabrics 00:31:27.150 rmmod nvme_keyring 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2526546 ']' 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2526546 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2526546 ']' 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2526546 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:27.150 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2526546 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2526546' 00:31:27.408 killing process with pid 2526546 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2526546 00:31:27.408 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2526546 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.665 03:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.558 03:41:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:29.558 00:31:29.558 real 0m34.459s 00:31:29.558 user 2m0.504s 00:31:29.558 sys 0m6.202s 00:31:29.558 03:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:29.558 03:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
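Teardown follows the nvmftestfini pattern: error-exit is relaxed with set +e and the nvme-tcp / nvme-fabrics / nvme-keyring modules are unloaded in a bounded retry loop (the for i in {1..20} in the trace), since the modules can stay busy for a moment while connections drain. A hedged sketch of that loop; the back-off sleep is an assumption and is not visible in the trace:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break     # succeeds once all refs are gone
        sleep 1                              # assumed back-off, not in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e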
00:31:29.558 ************************************ 00:31:29.558 END TEST nvmf_failover 00:31:29.558 ************************************ 00:31:29.558 03:41:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:29.558 03:41:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:29.558 03:41:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:29.558 03:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:29.558 ************************************ 00:31:29.558 START TEST nvmf_host_discovery 00:31:29.558 ************************************ 00:31:29.558 03:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:29.815 * Looking for test storage... 00:31:29.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.815 03:41:14 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:29.815 03:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.712 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:31.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:31.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:31.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:31.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:31.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:31:31.713 00:31:31.713 --- 10.0.0.2 ping statistics --- 00:31:31.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.713 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:31.713 03:41:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:31.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:31:31.713 00:31:31.713 --- 10.0.0.1 ping statistics --- 00:31:31.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.713 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:31.713 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2531998 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2531998 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2531998 ']' 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:31.971 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.971 [2024-07-21 03:41:17.081269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:31.971 [2024-07-21 03:41:17.081343] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.971 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.971 [2024-07-21 03:41:17.146315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.971 [2024-07-21 03:41:17.229807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.971 [2024-07-21 03:41:17.229864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.971 [2024-07-21 03:41:17.229878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.971 [2024-07-21 03:41:17.229902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.971 [2024-07-21 03:41:17.229912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
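The target runs inside a network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over a real TCP path on a single host. The nvmf_tcp_init plumbing traced above reduces to:

    # Reconstructed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # sanity-check the path

Every nvmf_tgt instance below is then launched through ip netns exec cvl_0_0_ns_spdk, so each listener it opens lives on the target side of this link.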
00:31:31.971 [2024-07-21 03:41:17.229938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 [2024-07-21 03:41:17.361931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 [2024-07-21 03:41:17.370134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 null0 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 null1 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2532084 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2532084 /tmp/host.sock 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2532084 ']' 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:32.229 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:32.229 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 [2024-07-21 03:41:17.441741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:32.229 [2024-07-21 03:41:17.441806] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532084 ] 00:31:32.229 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.229 [2024-07-21 03:41:17.502789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.488 [2024-07-21 03:41:17.594413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.488 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:32.746 03:41:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 [2024-07-21 03:41:17.975767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.746 03:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.746 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:33.004 03:41:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:33.569 [2024-07-21 03:41:18.729107] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:33.569 [2024-07-21 03:41:18.729136] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:33.569 [2024-07-21 03:41:18.729160] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:33.569 [2024-07-21 03:41:18.816448] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:33.827 [2024-07-21 03:41:19.000571] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:33.827 [2024-07-21 03:41:19.000598] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
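The trace above and below shows the suite's generic polling pattern: waitforcondition eval's a condition string up to max=10 times, sleeping one second between attempts, and each condition compares the output of small RPC helpers against an expected value. A minimal sketch of those helpers, reconstructed from the xtrace markers in this log (autotest_common.sh@910-@916, host/discovery.sh@55/@59/@74-@75); rpc_cmd is assumed to be the suite's wrapper around scripts/rpc.py, and the exact upstream function bodies may differ:

  # Poll a shell condition up to 10 times, one second apart
  # (mirrors autotest_common.sh@910-@916 in the trace).
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # host/discovery.sh@55/@59: query the host app's RPC socket and
  # flatten the names into a single space-separated, sorted line.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  # host/discovery.sh@74-@75: count notifications newer than notify_id,
  # then advance notify_id past them (0 -> 1 -> 2 -> 4 in this run).
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # Usage, as seen in the trace:
  #   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
  #   waitforcondition 'get_notification_count && ((notification_count == expected_count))'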
00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.086 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.087 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.344 [2024-07-21 03:41:19.400023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:34.344 [2024-07-21 03:41:19.401072] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:34.344 [2024-07-21 03:41:19.401120] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.344 03:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:34.344 03:41:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:34.344 [2024-07-21 03:41:19.527027] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:34.344 [2024-07-21 03:41:19.625696] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:34.344 [2024-07-21 03:41:19.625717] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:34.344 [2024-07-21 03:41:19.625726] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:35.276 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.276 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:35.276 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.277 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.536 [2024-07-21 03:41:20.620070] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:35.536 [2024-07-21 03:41:20.620111] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.536 [2024-07-21 03:41:20.621632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.536 [2024-07-21 03:41:20.621693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.536 [2024-07-21 03:41:20.621711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.536 [2024-07-21 03:41:20.621725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.536 [2024-07-21 03:41:20.621755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.536 [2024-07-21 03:41:20.621774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.536 [2024-07-21 03:41:20.621789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.536 [2024-07-21 03:41:20.621802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.536 [2024-07-21 03:41:20.621817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.536 03:41:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:35.536 [2024-07-21 03:41:20.631630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.536 [2024-07-21 03:41:20.641681] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.536 [2024-07-21 03:41:20.641985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-07-21 03:41:20.642020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.536 [2024-07-21 03:41:20.642039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 [2024-07-21 03:41:20.642066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.536 [2024-07-21 03:41:20.642091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.536 [2024-07-21 03:41:20.642108] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.536 [2024-07-21 03:41:20.642126] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.536 [2024-07-21 03:41:20.642165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.536 [2024-07-21 03:41:20.651759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.536 [2024-07-21 03:41:20.651931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-07-21 03:41:20.651963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.536 [2024-07-21 03:41:20.651982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 [2024-07-21 03:41:20.652007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.536 [2024-07-21 03:41:20.652032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.536 [2024-07-21 03:41:20.652048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.536 [2024-07-21 03:41:20.652064] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.536 [2024-07-21 03:41:20.652086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.536 [2024-07-21 03:41:20.661831] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.536 [2024-07-21 03:41:20.661981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-07-21 03:41:20.662010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.536 [2024-07-21 03:41:20.662027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 [2024-07-21 03:41:20.662064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.536 [2024-07-21 03:41:20.662086] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.536 [2024-07-21 03:41:20.662101] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.536 [2024-07-21 03:41:20.662114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.536 [2024-07-21 03:41:20.662133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:35.536 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:35.536 [2024-07-21 03:41:20.671917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.536 [2024-07-21 03:41:20.672121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-07-21 03:41:20.672152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.536 [2024-07-21 03:41:20.672171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 [2024-07-21 03:41:20.672196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.536 [2024-07-21 03:41:20.672220] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.536 [2024-07-21 03:41:20.672236] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.536 [2024-07-21 03:41:20.672252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.536 [2024-07-21 03:41:20.672287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.536 [2024-07-21 03:41:20.681994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.536 [2024-07-21 03:41:20.682161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-07-21 03:41:20.682206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.536 [2024-07-21 03:41:20.682224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.536 [2024-07-21 03:41:20.682247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.537 [2024-07-21 03:41:20.682269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.537 [2024-07-21 03:41:20.682284] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.537 [2024-07-21 03:41:20.682299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.537 [2024-07-21 03:41:20.682319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.537 [2024-07-21 03:41:20.692073] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.537 [2024-07-21 03:41:20.692250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-07-21 03:41:20.692281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.537 [2024-07-21 03:41:20.692299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.537 [2024-07-21 03:41:20.692331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.537 [2024-07-21 03:41:20.692356] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.537 [2024-07-21 03:41:20.692373] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.537 [2024-07-21 03:41:20.692389] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.537 [2024-07-21 03:41:20.692410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
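The repeated ERROR bursts around this point are expected after the nvmf_subsystem_remove_listener call above, not a test failure: the host keeps trying to reconnect its existing controller to 10.0.0.2:4420, and each attempt fails with connect() errno 111 (ECONNREFUSED) because that listener is gone, until the discovery poller fetches a fresh log page, sees that only port 4421 remains, and drops the stale path ("...4420 not found" / "...4421 found again" below). The check that follows uses the path helper visible at host/discovery.sh@63; a hedged sketch reconstructed from the trace, under the same rpc_cmd assumption as above:

  # List the trsvcids of every connected path of one controller,
  # numerically sorted (host/discovery.sh@63 in the trace).
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # After the 4420 listener is removed, only the second port should survive:
  #   waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'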
00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.537 [2024-07-21 03:41:20.702154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.537 [2024-07-21 03:41:20.702308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-07-21 03:41:20.702354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386450 with addr=10.0.0.2, port=4420 00:31:35.537 [2024-07-21 03:41:20.702372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386450 is same with the state(5) to be set 00:31:35.537 [2024-07-21 03:41:20.702397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386450 (9): Bad file descriptor 00:31:35.537 [2024-07-21 03:41:20.702420] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.537 [2024-07-21 03:41:20.702436] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.537 [2024-07-21 03:41:20.702451] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.537 [2024-07-21 03:41:20.702472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.537 [2024-07-21 03:41:20.706521] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:35.537 [2024-07-21 03:41:20.706553] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:35.537 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.795 03:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.726 [2024-07-21 03:41:21.988772] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:36.726 [2024-07-21 03:41:21.988813] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:36.726 [2024-07-21 03:41:21.988836] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:36.982 [2024-07-21 03:41:22.075132] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:36.982 [2024-07-21 03:41:22.183246] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:36.982 [2024-07-21 03:41:22.183291] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:36.982 request: 00:31:36.982 { 00:31:36.982 "name": "nvme", 00:31:36.982 "trtype": "tcp", 00:31:36.982 "traddr": "10.0.0.2", 00:31:36.982 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:36.982 "adrfam": "ipv4", 00:31:36.982 "trsvcid": "8009", 00:31:36.982 "wait_for_attach": true, 00:31:36.982 "method": "bdev_nvme_start_discovery", 00:31:36.982 "req_id": 1 00:31:36.982 } 00:31:36.982 Got JSON-RPC error response 00:31:36.982 response: 00:31:36.982 { 00:31:36.982 "code": -17, 00:31:36.982 "message": "File exists" 00:31:36.982 } 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:36.982 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.983 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 request: 00:31:37.239 { 00:31:37.239 "name": "nvme_second", 00:31:37.239 "trtype": "tcp", 00:31:37.239 "traddr": "10.0.0.2", 00:31:37.239 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:37.239 "adrfam": "ipv4", 00:31:37.239 "trsvcid": "8009", 00:31:37.239 "wait_for_attach": true, 00:31:37.239 "method": "bdev_nvme_start_discovery", 00:31:37.239 "req_id": 1 00:31:37.239 } 00:31:37.239 Got JSON-RPC error response 00:31:37.239 response: 00:31:37.239 { 00:31:37.239 "code": -17, 00:31:37.239 "message": "File exists" 00:31:37.239 } 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.239 03:41:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.239 03:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.169 [2024-07-21 03:41:23.395740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.169 [2024-07-21 03:41:23.395792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1382500 with addr=10.0.0.2, port=8010 00:31:38.169 [2024-07-21 03:41:23.395820] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:38.169 [2024-07-21 03:41:23.395835] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:38.169 [2024-07-21 03:41:23.395848] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:39.144 [2024-07-21 03:41:24.398184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.144 [2024-07-21 03:41:24.398248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1382500 with addr=10.0.0.2, port=8010 00:31:39.144 [2024-07-21 03:41:24.398277] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:39.144 [2024-07-21 03:41:24.398292] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:39.144 [2024-07-21 03:41:24.398306] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:40.514 [2024-07-21 03:41:25.400347] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:40.514 request: 00:31:40.514 { 00:31:40.514 "name": "nvme_second", 00:31:40.514 "trtype": "tcp", 00:31:40.514 "traddr": "10.0.0.2", 00:31:40.514 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:40.514 "adrfam": "ipv4", 00:31:40.514 "trsvcid": "8010", 00:31:40.514 "attach_timeout_ms": 3000, 00:31:40.514 "method": "bdev_nvme_start_discovery", 00:31:40.514 "req_id": 1 00:31:40.514 } 00:31:40.514 Got JSON-RPC error response 00:31:40.514 response: 00:31:40.514 { 00:31:40.514 "code": -110, 00:31:40.514 "message": "Connection timed out" 
00:31:40.514 } 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2532084 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:40.514 rmmod nvme_tcp 00:31:40.514 rmmod nvme_fabrics 00:31:40.514 rmmod nvme_keyring 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2531998 ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 2531998 ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2531998' 00:31:40.514 killing process with pid 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 2531998 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.514 03:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:43.047 00:31:43.047 real 0m12.966s 00:31:43.047 user 0m18.752s 00:31:43.047 sys 0m2.662s 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 ************************************ 00:31:43.047 END TEST nvmf_host_discovery 00:31:43.047 ************************************ 00:31:43.047 03:41:27 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:43.047 03:41:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:43.047 03:41:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:43.047 03:41:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 ************************************ 00:31:43.047 START TEST nvmf_host_multipath_status 00:31:43.047 ************************************ 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:43.047 * Looking for test storage... 
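A note on the two failure modes the discovery test above exercises; this is a minimal sketch, not part of the test run, with the socket path, rpc.py path, flags, and NQN copied verbatim from the trace. Starting a second discovery service toward an address the host is already attached to returns JSON-RPC error -17 "File exists"; pointing discovery at port 8010 (where nothing listens) with a 3000 ms attach timeout returns -110 "Connection timed out".

# Sketch: reproduce the two NOT cases from host/discovery.sh by hand.
# Assumes an SPDK host app listening on /tmp/host.sock that already has
# a discovery service attached to 10.0.0.2:8009, as in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Duplicate start against an already-attached address: expect -17 "File exists".
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo "expected: File exists"

# No listener on 8010, so the 3000 ms attach timeout fires: expect -110.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo "expected: Connection timed out"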
00:31:43.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:43.047 03:41:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.047 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:43.048 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:43.048 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:43.048 03:41:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:44.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:44.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
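The bucketing above sorts PCI IDs into e810/x722/mlx arrays before picking test NICs; the device IDs are taken from the trace (0x8086:0x159b is the Intel E810 "ice" part matched twice on this node). A hedged equivalent using plain lspci, which is not what the script runs but shows the same vendor:device matching:

# lspci -d vendor:device lists only matching PCI functions.
lspci -d 8086:159b    # E810, matched above as 0000:0a:00.0 and 0000:0a:00.1
lspci -d 8086:1592    # the other E810 variant the script probes
lspci -d 8086:37d2    # X722 bucket
lspci -d 15b3:        # any Mellanox device (the mlx bucket)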
00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:44.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:44.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:44.973 03:41:29 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.973 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.974 03:41:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:44.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:31:44.974 00:31:44.974 --- 10.0.0.2 ping statistics --- 00:31:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.974 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:31:44.974 00:31:44.974 --- 10.0.0.1 ping statistics --- 00:31:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.974 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2535110 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2535110 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2535110 ']' 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:44.974 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:44.974 [2024-07-21 03:41:30.084657] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
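The topology those two pings just validated is built from the ip(8) calls traced in nvmf_tcp_init above; a condensed sketch using the same namespace and interface names from the trace (cvl_0_0_ns_spdk, cvl_0_0, cvl_0_1), assuming the two NIC ports are physically looped as on this rig:

# Condensed from nvmf/common.sh@248..268 above: the target-side port moves
# into a private netns, the initiator-side port stays in the root netns,
# and 10.0.0.1 <-> 10.0.0.2 run over the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> host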
00:31:44.974 [2024-07-21 03:41:30.084750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.974 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.974 [2024-07-21 03:41:30.149138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:44.974 [2024-07-21 03:41:30.240502] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.974 [2024-07-21 03:41:30.240565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.974 [2024-07-21 03:41:30.240579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.974 [2024-07-21 03:41:30.240595] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.974 [2024-07-21 03:41:30.240605] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.974 [2024-07-21 03:41:30.240703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.974 [2024-07-21 03:41:30.240709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.230 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:45.230 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:45.230 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:45.230 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.231 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:45.231 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.231 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2535110 00:31:45.231 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:45.486 [2024-07-21 03:41:30.626860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.487 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:45.743 Malloc0 00:31:45.743 03:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:46.000 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.257 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.514 [2024-07-21 03:41:31.651665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.514 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:46.772 [2024-07-21 03:41:31.896286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2535324 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2535324 /var/tmp/bdevperf.sock 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2535324 ']' 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:46.772 03:41:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:47.030 03:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:47.030 03:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:47.030 03:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:47.287 03:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:47.851 Nvme0n1 00:31:47.851 03:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:48.107 Nvme0n1 00:31:48.107 03:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:48.107 03:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:50.005 03:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:50.005 03:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:50.571 03:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:50.829 03:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:51.763 03:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:51.763 03:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.763 03:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.763 03:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.021 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.021 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:52.021 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.021 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.281 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.281 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.281 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.281 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.540 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.540 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.540 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.540 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.798 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.798 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.798 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.798 03:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:53.057 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.057 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.057 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.057 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.315 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.315 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:53.315 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.573 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:53.573 03:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:54.942 03:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:54.942 03:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:54.942 03:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.942 03:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.942 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.942 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:54.942 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.942 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.199 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.199 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.199 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.199 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.456 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:55.456 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.456 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.456 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.713 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.713 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.713 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.713 03:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.971 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.971 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:55.971 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.971 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.227 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.227 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:56.227 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:56.484 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:56.741 03:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:57.673 03:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:57.673 03:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:57.673 03:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.673 03:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.930 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.930 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:57.930 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.930 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:58.188 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.188 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:58.188 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.188 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.444 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.444 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.444 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.444 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.702 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.702 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:58.702 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.702 03:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.959 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.959 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:58.959 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.959 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:59.216 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.216 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:59.216 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:59.474 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:59.731 03:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:00.676 03:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:00.676 03:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:00.676 03:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.677 03:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.934 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.934 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:00.934 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.934 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:01.190 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.190 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:01.190 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.190 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:01.447 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.447 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:01.447 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.447 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.704 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.704 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:01.704 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.704 03:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.961 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
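Each check_status round in this test is a series of single-field probes per port against bdev_nvme_get_io_paths. A paraphrase of the port_status helper visible in the trace (host/multipath_status.sh@64), with the bdevperf socket, rpc.py path, and jq filter copied from the log; the function name and argument order mirror how the trace invokes it:

# One port_status probe, as run repeatedly above: ask bdevperf for its
# io_paths and compare one field of the path on a given trsvcid.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

port_status() {    # usage: port_status 4420 current true
    local port=$1 field=$2 expected=$3 got
    got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ $got == "$expected" ]]
}

# After set_ANA_state inaccessible inaccessible, neither path is current
# or accessible, but both stay connected, matching the checks above.
port_status 4420 current false && port_status 4420 accessible false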
00:32:01.961 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:01.961 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.961 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:02.217 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.217 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:02.217 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:02.474 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:02.731 03:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:03.661 03:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:03.661 03:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:03.661 03:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.661 03:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.918 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.918 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:03.918 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.918 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:04.176 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.176 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:04.176 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.176 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.432 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.432 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:32:04.432 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.433 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.689 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.690 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:04.690 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.690 03:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.947 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.947 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:04.947 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.947 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.205 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.205 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:05.205 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:05.463 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:05.719 03:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:06.651 03:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:06.651 03:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:06.651 03:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.651 03:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.909 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.909 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:06.909 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.909 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.166 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.166 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.166 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.166 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.424 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.424 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.424 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.424 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.681 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.681 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:07.681 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.681 03:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.938 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.938 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:07.938 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.938 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:08.194 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.194 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:08.451 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:08.452 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:08.708 03:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:08.964 03:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:09.895 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:09.895 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:09.895 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.895 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.153 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.153 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:10.153 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.153 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:10.409 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.409 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:10.409 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.409 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.667 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.667 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.667 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.667 03:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.923 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.923 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.923 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.923 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:11.180 03:41:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.180 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:11.180 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.180 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:11.437 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.437 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:11.437 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:11.694 03:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:11.950 03:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.321 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:13.579 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.579 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:13.579 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.579 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:13.837 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.837 03:41:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:13.837 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.837 03:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:14.095 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.095 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:14.095 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.095 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.352 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.352 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:14.352 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.352 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:14.609 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.609 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:14.609 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:14.867 03:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:15.124 03:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:16.057 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:16.058 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:16.058 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.058 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.345 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.345 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:16.345 03:42:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.345 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.603 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.603 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:16.603 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.603 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:16.861 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.861 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:16.861 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.861 03:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.118 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.118 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:17.118 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.118 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.377 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.377 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:17.377 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.377 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:17.635 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.635 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:17.635 03:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:17.894 03:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:18.150 03:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:19.082 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:19.082 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:19.082 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.082 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:19.340 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.340 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:19.340 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.340 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:19.597 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:19.597 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:19.597 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.597 03:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:19.854 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.854 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:19.854 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.854 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.111 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.111 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.111 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.111 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:20.369 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.369 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:20.369 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.369 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2535324 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2535324 ']' 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2535324 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2535324 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2535324' 00:32:20.626 killing process with pid 2535324 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2535324 00:32:20.626 03:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2535324 00:32:20.885 Connection closed with partial response: 00:32:20.885 00:32:20.885 00:32:20.885 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2535324 00:32:20.885 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:20.885 [2024-07-21 03:41:31.960439] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:20.885 [2024-07-21 03:41:31.960551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535324 ] 00:32:20.885 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.885 [2024-07-21 03:41:32.023707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.886 [2024-07-21 03:41:32.112918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.886 Running I/O for 90 seconds... 
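Condensed, the ANA-state matrix driven between 03:41:47 and 03:42:05 above reads as below; every set_ANA_state argument pair and every expected check_status vector is copied from the trace (using the helper sketch earlier), only the condensed layout is new. The dump that follows is bdevperf's own output (try.txt) for the whole 90-second run; its ASYMMETRIC ACCESS INACCESSIBLE completions correspond to the windows in which a listener had just been set inaccessible.

    set_ANA_state inaccessible inaccessible && sleep 1
    check_status false false true true false false

    set_ANA_state inaccessible optimized && sleep 1
    check_status false true true true false true

    # Switch bdevperf's multipath policy before testing the optimized states.
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized && sleep 1
    check_status true true true true true true

    set_ANA_state non_optimized optimized && sleep 1
    check_status false true true true true true

    set_ANA_state non_optimized non_optimized && sleep 1
    check_status true true true true true true

    set_ANA_state non_optimized inaccessible && sleep 1
    check_status true false true true true false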
00:32:20.886 [2024-07-21 03:41:47.684168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.684222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.684934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.684950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.685962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.685980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.686022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.686063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
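The repeating pairs in this dump are bdevperf printing each queued command (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion). The (03/02) in the completions is the NVMe status pair the text already spells out: status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible). A small lookup for the low path-related codes, handy when reading dumps like the one continuing below; the values follow the NVMe base specification, the function itself is only an illustration:

    # Decode the SC half of an "(SCT/SC)" pair when SCT is 03h (Path Related Status).
    decode_path_status() {
        case "$1" in
            00) echo "Internal Path Error" ;;
            01) echo "Asymmetric Access Persistent Loss" ;;
            02) echo "Asymmetric Access Inaccessible" ;;
            03) echo "Asymmetric Access Transition" ;;
            *)  echo "unknown path-related status code: $1" ;;
        esac
    }

    decode_path_status 02   # -> Asymmetric Access Inaccessible, as printed above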
00:32:20.886 [2024-07-21 03:41:47.686104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.886 [2024-07-21 03:41:47.686168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.886 [2024-07-21 03:41:47.686533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:20.886 [2024-07-21 03:41:47.686555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.686951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.686974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:20.887 [2024-07-21 03:41:47.687409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.687960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.687979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:20.887 [2024-07-21 03:41:47.688236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.887 [2024-07-21 03:41:47.688252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[log elided: a long run of repeated nvme_qpair.c NOTICE pairs: 243:nvme_io_qpair_print_command (READ/WRITE, sqid:1, nsid:1, lba 83232-85384, len:8), each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; timestamps 03:41:47.688 through 03:42:03.269, sqhd 0015 through 000d]
00:32:20.890 Received shutdown signal, test time was about 32.388229 seconds
00:32:20.890
00:32:20.890 Latency(us)
00:32:20.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:20.890 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:20.890 Verification LBA range: start 0x0 length 0x4000
00:32:20.890 Nvme0n1 : 32.39 8103.20 31.65 0.00 0.00 15769.52 467.25 4026531.84
00:32:20.890 ===================================================================================================================
00:32:20.890 Total : 8103.20 31.65 0.00 0.00 15769.52 467.25 4026531.84
00:32:20.890 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:21.148 rmmod nvme_tcp
00:32:21.148 rmmod nvme_fabrics
00:32:21.148 rmmod nvme_keyring
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2535110 ']'
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2535110
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2535110 ']'
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2535110
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2535110
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2535110'
00:32:21.148 killing process with pid 2535110
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2535110
00:32:21.148 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2535110
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:21.406 03:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:23.937 03:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:23.937
00:32:23.937 real 0m40.889s
00:32:23.937 user 2m2.187s
00:32:23.937 sys 0m11.013s
00:32:23.937 03:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:32:23.937 03:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:23.937 ************************************
00:32:23.937 END TEST nvmf_host_multipath_status
00:32:23.937 ************************************
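[Annotation: the trace above is the standard autotest teardown: delete the test subsystem over RPC, unload the kernel initiator modules, then kill and reap the nvmf_tgt reactor. A condensed sketch of the same sequence, assuming this run's pid and paths; the canonical helpers are nvmftestfini/killprocess in test/nvmf/common.sh and test/common/autotest_common.sh:]
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=2535110
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem from the target
sync
modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$pid"                   # stop the nvmf_tgt reactor (reactor_0)
wait "$pid" 2>/dev/null       # reaping works because the same shell started the target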
00:32:23.937 03:42:08 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:23.937 03:42:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:32:23.937 03:42:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:32:23.937 03:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:23.937 ************************************
00:32:23.937 START TEST nvmf_discovery_remove_ifc
00:32:23.937 ************************************
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:23.937 * Looking for test storage...
00:32:23.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[PATH dumps elided: paths/export.sh@2-@6 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the existing PATH, export it, and echo the result]
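[Annotation: nvmf/common.sh@17-@18 above derive the host identity used for all later connect calls. A minimal sketch of that step, assuming nvme-cli's gen-hostnqn; the variable names are the ones from this log and the exact parsing in common.sh may differ:]
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the trailing uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"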
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.937 03:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.849 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.850 03:42:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:25.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:25.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:32:25.850 00:32:25.850 --- 10.0.0.2 ping statistics --- 00:32:25.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.850 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:32:25.850 00:32:25.850 --- 10.0.0.1 ping statistics --- 00:32:25.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.850 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2542077 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2542077 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2542077 ']' 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:25.850 03:42:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.850 [2024-07-21 03:42:10.962882] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:32:25.850 [2024-07-21 03:42:10.962970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.850 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.850 [2024-07-21 03:42:11.029825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.851 [2024-07-21 03:42:11.116536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.851 [2024-07-21 03:42:11.116592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.851 [2024-07-21 03:42:11.116607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.851 [2024-07-21 03:42:11.116626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.851 [2024-07-21 03:42:11.116637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.851 [2024-07-21 03:42:11.116666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.108 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:26.108 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:26.108 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.109 [2024-07-21 03:42:11.266201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.109 [2024-07-21 03:42:11.274370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:26.109 null0 00:32:26.109 [2024-07-21 03:42:11.306319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2542141 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2542141 /tmp/host.sock 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2542141 ']' 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:26.109 
03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:26.109 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:26.109 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.109 [2024-07-21 03:42:11.381490] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:26.109 [2024-07-21 03:42:11.381584] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542141 ] 00:32:26.109 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.366 [2024-07-21 03:42:11.455206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.366 [2024-07-21 03:42:11.550550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.366 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:26.366 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.624 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.624 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:26.624 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.624 03:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.554 [2024-07-21 03:42:12.755298] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:27.554 [2024-07-21 03:42:12.755328] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:27.554 [2024-07-21 03:42:12.755349] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.826 [2024-07-21 03:42:12.883799] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:27.826 [2024-07-21 03:42:12.984250] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:27.826 [2024-07-21 03:42:12.984310] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:27.826 [2024-07-21 03:42:12.984347] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:27.826 [2024-07-21 03:42:12.984369] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:27.826 [2024-07-21 03:42:12.984402] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.826 03:42:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.826 [2024-07-21 03:42:12.992187] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e79df0 was disconnected and freed. delete nvme_qpair. 
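Note: the attach sequence above is driven entirely by the bdev_nvme_start_discovery RPC shown in the trace. For reference, the call reconstructed on one command line (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; every flag is copied verbatim from the xtrace output):

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

-b nvme sets the controller name prefix (hence the nvme0n1 bdev), port 8009 is the discovery service the target advertised at startup, and --wait-for-attach blocks the RPC until every discovered subsystem (here nqn.2016-06.io.spdk:cnode0 on port 4420) is attached. The deliberately short --ctrlr-loss-timeout-sec/--reconnect-delay-sec values are what make the interface-removal phase below finish in seconds.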
00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.826 03:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:29.202 03:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:30.132 03:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.063 03:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.997 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.998 03:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
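Note: the repeating rpc_cmd/jq/sleep blocks above and below are the wait_for_bdev polling loop from discovery_remove_ifc.sh. A minimal sketch of the pattern as it shows up in the trace (the real function bodies in the script may differ slightly):

    get_bdev_list() {
        # list bdev names in a stable, single-line form
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once a second until the bdev list equals the expected string
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }

At this point the test has already deleted 10.0.0.2/24 from cvl_0_0 and set the link down inside cvl_0_0_ns_spdk, so wait_for_bdev '' is spinning until nvme0n1 disappears, which only happens once bdev_nvme gives up reconnecting.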
00:32:33.372 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:33.373 03:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.373 [2024-07-21 03:42:18.425708] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:33.373 [2024-07-21 03:42:18.425794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.373 [2024-07-21 03:42:18.425815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.373 [2024-07-21 03:42:18.425844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.373 [2024-07-21 03:42:18.425858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.373 [2024-07-21 03:42:18.425872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.373 [2024-07-21 03:42:18.425885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.373 [2024-07-21 03:42:18.425898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.373 [2024-07-21 03:42:18.425935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.373 [2024-07-21 03:42:18.425954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.373 [2024-07-21 03:42:18.425969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.373 [2024-07-21 03:42:18.425985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40f80 is same with the state(5) to be set 00:32:33.373 [2024-07-21 03:42:18.435733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e40f80 (9): Bad file descriptor 00:32:33.373 [2024-07-21 03:42:18.445790] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:34.310 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.311 [2024-07-21 03:42:19.464667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:34.311 [2024-07-21 
03:42:19.464738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e40f80 with addr=10.0.0.2, port=4420 00:32:34.311 [2024-07-21 03:42:19.464765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40f80 is same with the state(5) to be set 00:32:34.311 [2024-07-21 03:42:19.464820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e40f80 (9): Bad file descriptor 00:32:34.311 [2024-07-21 03:42:19.465301] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:34.311 [2024-07-21 03:42:19.465337] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:34.311 [2024-07-21 03:42:19.465354] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:34.311 [2024-07-21 03:42:19.465372] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:34.311 [2024-07-21 03:42:19.465409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:34.311 [2024-07-21 03:42:19.465429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:34.311 03:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.290 [2024-07-21 03:42:20.467943] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.290 [2024-07-21 03:42:20.468011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.290 [2024-07-21 03:42:20.468025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.290 [2024-07-21 03:42:20.468039] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:35.290 [2024-07-21 03:42:20.468069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
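Note: errno 110 is ETIMEDOUT, i.e. the TCP connect to 10.0.0.2:4420 gets no answer now that cvl_0_0 is down inside the namespace. With the options passed at discovery time (--reconnect-delay-sec 1, --ctrlr-loss-timeout-sec 2), bdev_nvme retries about once per second and declares the controller lost after roughly two seconds, which is what releases the wait_for_bdev '' loop. To watch the retry state from outside the test, the standard RPC below works, assuming an SPDK recent enough to have it (output field names vary between versions):

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .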
00:32:35.291 [2024-07-21 03:42:20.468108] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:35.291 [2024-07-21 03:42:20.468168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.291 [2024-07-21 03:42:20.468200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.291 [2024-07-21 03:42:20.468219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.291 [2024-07-21 03:42:20.468232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.291 [2024-07-21 03:42:20.468245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.291 [2024-07-21 03:42:20.468258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.291 [2024-07-21 03:42:20.468273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.291 [2024-07-21 03:42:20.468285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.291 [2024-07-21 03:42:20.468300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.291 [2024-07-21 03:42:20.468313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.291 [2024-07-21 03:42:20.468326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
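Note: the block of NOTICE lines above is the admin queue being drained as the controllers are torn down; every outstanding command is force-completed. Decoding the repeated entries against the NVMe base spec:

    # ASYNC EVENT REQUEST (0c)       -> admin opcode 0x0c, one per queued AER (cid 0-3)
    # KEEP ALIVE (18)                -> admin opcode 0x18, cid 4
    # ABORTED - SQ DELETION (00/08)  -> status code type 0x0 (generic), status code 0x08

remove_discovery_entry above is the other half: once the data controller is lost, the discovery poller drops the nqn.2016-06.io.spdk:cnode0 entry, and the discovery controller itself is failed right after.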
00:32:35.291 [2024-07-21 03:42:20.468421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e40410 (9): Bad file descriptor 00:32:35.291 [2024-07-21 03:42:20.469446] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:35.291 [2024-07-21 03:42:20.469468] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.291 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.550 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:35.550 03:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:36.483 03:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:37.416 [2024-07-21 03:42:22.519744] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.416 [2024-07-21 03:42:22.519780] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.416 [2024-07-21 03:42:22.519803] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.416 [2024-07-21 03:42:22.606113] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:37.416 03:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:37.673 [2024-07-21 03:42:22.783390] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:37.673 [2024-07-21 03:42:22.783447] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:37.673 [2024-07-21 03:42:22.783486] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:37.673 [2024-07-21 03:42:22.783510] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:37.673 [2024-07-21 03:42:22.783526] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.673 [2024-07-21 03:42:22.788332] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e4dd30 was disconnected and freed. delete nvme_qpair. 
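Note: this second attach is the recovery half of the test. No new RPC is issued; restoring the interface is enough for the still-running discovery service to reconnect on its own. The step, with commands copied from the trace above:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1

The namespace comes back as nvme1n1 rather than nvme0n1 because the nvme0 controller was deleted above, so bdev_nvme hands out the next free controller index on re-attach.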
00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2542141 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2542141 ']' 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2542141 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2542141 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2542141' 00:32:38.601 killing process with pid 2542141 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2542141 00:32:38.601 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2542141 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:38.857 03:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:38.857 rmmod nvme_tcp 00:32:38.857 rmmod nvme_fabrics 00:32:38.857 rmmod nvme_keyring 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
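Note: teardown order as reconstructed from the trace (killprocess and nvmftestfini are autotest helpers; the lines below are a sketch, not their exact bodies):

    killprocess "$hostpid"       # pid 2542141, the app on /tmp/host.sock: kill, then wait
    modprobe -v -r nvme-tcp      # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics

The target side follows immediately below: killprocess "$nvmfpid" (pid 2542077), then remove_spdk_ns, which presumably deletes cvl_0_0_ns_spdk, and ip -4 addr flush cvl_0_1 to drop the initiator-side address.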
00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2542077 ']' 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2542077 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2542077 ']' 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2542077 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2542077 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2542077' 00:32:38.857 killing process with pid 2542077 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2542077 00:32:38.857 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2542077 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.115 03:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.661 03:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:41.661 00:32:41.661 real 0m17.564s 00:32:41.661 user 0m25.535s 00:32:41.661 sys 0m2.998s 00:32:41.661 03:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:41.661 03:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.661 ************************************ 00:32:41.661 END TEST nvmf_discovery_remove_ifc 00:32:41.661 ************************************ 00:32:41.661 03:42:26 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:41.661 03:42:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:41.661 03:42:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:41.661 03:42:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.661 ************************************ 00:32:41.661 START TEST nvmf_identify_kernel_target 00:32:41.661 ************************************ 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:41.661 * Looking for test storage... 00:32:41.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
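Note: nvmftestinit here repeats the same prepare_net_devs pass seen at the top of the previous test: walk the PCI bus, bucket NICs into the e810/x722/mlx arrays by vendor:device ID, then resolve each selected function to its kernel net device. The device resolution is a plain sysfs lookup; with the PCI address from this node's trace:

    pci=0000:0a:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, the E810 port used as the target side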
00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:41.661 03:42:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.561 03:42:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:43.561 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:43.562 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:43.562 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.562 
03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:43.562 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:43.562 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:43.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:32:43.562 00:32:43.562 --- 10.0.0.2 ping statistics --- 00:32:43.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.562 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:32:43.562 00:32:43.562 --- 10.0.0.1 ping statistics --- 00:32:43.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.562 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.562 
03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:43.562 03:42:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:44.497 Waiting for block devices as requested 00:32:44.497 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:44.755 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:44.755 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:45.014 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:45.014 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:45.014 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:45.014 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:45.272 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:45.272 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:45.272 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:45.272 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:45.530 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:45.530 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:45.530 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:45.530 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:45.788 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:45.788 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:45.788 03:42:31 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:45.788 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:46.047 No valid GPT data, bailing 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:46.047 00:32:46.047 Discovery Log Number of Records 2, Generation counter 2 00:32:46.047 =====Discovery Log Entry 0====== 00:32:46.047 trtype: tcp 00:32:46.047 adrfam: ipv4 00:32:46.047 subtype: current discovery subsystem 00:32:46.047 treq: not specified, sq flow control disable supported 00:32:46.047 portid: 1 00:32:46.047 trsvcid: 4420 00:32:46.047 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:46.047 traddr: 10.0.0.1 00:32:46.047 eflags: none 00:32:46.047 sectype: none 00:32:46.047 =====Discovery Log Entry 1====== 
00:32:46.047 trtype: tcp 00:32:46.047 adrfam: ipv4 00:32:46.047 subtype: nvme subsystem 00:32:46.047 treq: not specified, sq flow control disable supported 00:32:46.047 portid: 1 00:32:46.047 trsvcid: 4420 00:32:46.047 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:46.047 traddr: 10.0.0.1 00:32:46.047 eflags: none 00:32:46.047 sectype: none 00:32:46.047 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:46.047 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:46.047 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.047 ===================================================== 00:32:46.047 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:46.047 ===================================================== 00:32:46.047 Controller Capabilities/Features 00:32:46.047 ================================ 00:32:46.047 Vendor ID: 0000 00:32:46.047 Subsystem Vendor ID: 0000 00:32:46.047 Serial Number: 8f8984ba81edec873c08 00:32:46.047 Model Number: Linux 00:32:46.047 Firmware Version: 6.7.0-68 00:32:46.047 Recommended Arb Burst: 0 00:32:46.047 IEEE OUI Identifier: 00 00 00 00:32:46.047 Multi-path I/O 00:32:46.047 May have multiple subsystem ports: No 00:32:46.047 May have multiple controllers: No 00:32:46.047 Associated with SR-IOV VF: No 00:32:46.047 Max Data Transfer Size: Unlimited 00:32:46.047 Max Number of Namespaces: 0 00:32:46.047 Max Number of I/O Queues: 1024 00:32:46.047 NVMe Specification Version (VS): 1.3 00:32:46.047 NVMe Specification Version (Identify): 1.3 00:32:46.047 Maximum Queue Entries: 1024 00:32:46.047 Contiguous Queues Required: No 00:32:46.047 Arbitration Mechanisms Supported 00:32:46.047 Weighted Round Robin: Not Supported 00:32:46.047 Vendor Specific: Not Supported 00:32:46.047 Reset Timeout: 7500 ms 00:32:46.047 Doorbell Stride: 4 bytes 00:32:46.047 NVM Subsystem Reset: Not Supported 00:32:46.047 Command Sets Supported 00:32:46.047 NVM Command Set: Supported 00:32:46.047 Boot Partition: Not Supported 00:32:46.047 Memory Page Size Minimum: 4096 bytes 00:32:46.047 Memory Page Size Maximum: 4096 bytes 00:32:46.047 Persistent Memory Region: Not Supported 00:32:46.047 Optional Asynchronous Events Supported 00:32:46.047 Namespace Attribute Notices: Not Supported 00:32:46.047 Firmware Activation Notices: Not Supported 00:32:46.047 ANA Change Notices: Not Supported 00:32:46.047 PLE Aggregate Log Change Notices: Not Supported 00:32:46.047 LBA Status Info Alert Notices: Not Supported 00:32:46.047 EGE Aggregate Log Change Notices: Not Supported 00:32:46.047 Normal NVM Subsystem Shutdown event: Not Supported 00:32:46.047 Zone Descriptor Change Notices: Not Supported 00:32:46.047 Discovery Log Change Notices: Supported 00:32:46.047 Controller Attributes 00:32:46.047 128-bit Host Identifier: Not Supported 00:32:46.047 Non-Operational Permissive Mode: Not Supported 00:32:46.047 NVM Sets: Not Supported 00:32:46.047 Read Recovery Levels: Not Supported 00:32:46.047 Endurance Groups: Not Supported 00:32:46.047 Predictable Latency Mode: Not Supported 00:32:46.047 Traffic Based Keep ALive: Not Supported 00:32:46.047 Namespace Granularity: Not Supported 00:32:46.047 SQ Associations: Not Supported 00:32:46.047 UUID List: Not Supported 00:32:46.047 Multi-Domain Subsystem: Not Supported 00:32:46.047 Fixed Capacity Management: Not Supported 00:32:46.047 Variable Capacity Management: Not 
Supported 00:32:46.047 Delete Endurance Group: Not Supported 00:32:46.047 Delete NVM Set: Not Supported 00:32:46.047 Extended LBA Formats Supported: Not Supported 00:32:46.047 Flexible Data Placement Supported: Not Supported 00:32:46.047 00:32:46.047 Controller Memory Buffer Support 00:32:46.047 ================================ 00:32:46.047 Supported: No 00:32:46.047 00:32:46.047 Persistent Memory Region Support 00:32:46.047 ================================ 00:32:46.047 Supported: No 00:32:46.047 00:32:46.047 Admin Command Set Attributes 00:32:46.047 ============================ 00:32:46.047 Security Send/Receive: Not Supported 00:32:46.047 Format NVM: Not Supported 00:32:46.047 Firmware Activate/Download: Not Supported 00:32:46.047 Namespace Management: Not Supported 00:32:46.047 Device Self-Test: Not Supported 00:32:46.047 Directives: Not Supported 00:32:46.047 NVMe-MI: Not Supported 00:32:46.047 Virtualization Management: Not Supported 00:32:46.047 Doorbell Buffer Config: Not Supported 00:32:46.047 Get LBA Status Capability: Not Supported 00:32:46.047 Command & Feature Lockdown Capability: Not Supported 00:32:46.047 Abort Command Limit: 1 00:32:46.047 Async Event Request Limit: 1 00:32:46.047 Number of Firmware Slots: N/A 00:32:46.047 Firmware Slot 1 Read-Only: N/A 00:32:46.047 Firmware Activation Without Reset: N/A 00:32:46.047 Multiple Update Detection Support: N/A 00:32:46.047 Firmware Update Granularity: No Information Provided 00:32:46.047 Per-Namespace SMART Log: No 00:32:46.047 Asymmetric Namespace Access Log Page: Not Supported 00:32:46.047 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:46.047 Command Effects Log Page: Not Supported 00:32:46.047 Get Log Page Extended Data: Supported 00:32:46.047 Telemetry Log Pages: Not Supported 00:32:46.047 Persistent Event Log Pages: Not Supported 00:32:46.047 Supported Log Pages Log Page: May Support 00:32:46.047 Commands Supported & Effects Log Page: Not Supported 00:32:46.047 Feature Identifiers & Effects Log Page:May Support 00:32:46.047 NVMe-MI Commands & Effects Log Page: May Support 00:32:46.047 Data Area 4 for Telemetry Log: Not Supported 00:32:46.047 Error Log Page Entries Supported: 1 00:32:46.047 Keep Alive: Not Supported 00:32:46.047 00:32:46.047 NVM Command Set Attributes 00:32:46.047 ========================== 00:32:46.047 Submission Queue Entry Size 00:32:46.047 Max: 1 00:32:46.047 Min: 1 00:32:46.047 Completion Queue Entry Size 00:32:46.047 Max: 1 00:32:46.047 Min: 1 00:32:46.047 Number of Namespaces: 0 00:32:46.047 Compare Command: Not Supported 00:32:46.047 Write Uncorrectable Command: Not Supported 00:32:46.047 Dataset Management Command: Not Supported 00:32:46.047 Write Zeroes Command: Not Supported 00:32:46.047 Set Features Save Field: Not Supported 00:32:46.047 Reservations: Not Supported 00:32:46.047 Timestamp: Not Supported 00:32:46.047 Copy: Not Supported 00:32:46.047 Volatile Write Cache: Not Present 00:32:46.047 Atomic Write Unit (Normal): 1 00:32:46.047 Atomic Write Unit (PFail): 1 00:32:46.048 Atomic Compare & Write Unit: 1 00:32:46.048 Fused Compare & Write: Not Supported 00:32:46.048 Scatter-Gather List 00:32:46.048 SGL Command Set: Supported 00:32:46.048 SGL Keyed: Not Supported 00:32:46.048 SGL Bit Bucket Descriptor: Not Supported 00:32:46.048 SGL Metadata Pointer: Not Supported 00:32:46.048 Oversized SGL: Not Supported 00:32:46.048 SGL Metadata Address: Not Supported 00:32:46.048 SGL Offset: Supported 00:32:46.048 Transport SGL Data Block: Not Supported 00:32:46.048 Replay Protected Memory Block: 
Not Supported 00:32:46.048 00:32:46.048 Firmware Slot Information 00:32:46.048 ========================= 00:32:46.048 Active slot: 0 00:32:46.048 00:32:46.048 00:32:46.048 Error Log 00:32:46.048 ========= 00:32:46.048 00:32:46.048 Active Namespaces 00:32:46.048 ================= 00:32:46.048 Discovery Log Page 00:32:46.048 ================== 00:32:46.048 Generation Counter: 2 00:32:46.048 Number of Records: 2 00:32:46.048 Record Format: 0 00:32:46.048 00:32:46.048 Discovery Log Entry 0 00:32:46.048 ---------------------- 00:32:46.048 Transport Type: 3 (TCP) 00:32:46.048 Address Family: 1 (IPv4) 00:32:46.048 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:46.048 Entry Flags: 00:32:46.048 Duplicate Returned Information: 0 00:32:46.048 Explicit Persistent Connection Support for Discovery: 0 00:32:46.048 Transport Requirements: 00:32:46.048 Secure Channel: Not Specified 00:32:46.048 Port ID: 1 (0x0001) 00:32:46.048 Controller ID: 65535 (0xffff) 00:32:46.048 Admin Max SQ Size: 32 00:32:46.048 Transport Service Identifier: 4420 00:32:46.048 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:46.048 Transport Address: 10.0.0.1 00:32:46.048 Discovery Log Entry 1 00:32:46.048 ---------------------- 00:32:46.048 Transport Type: 3 (TCP) 00:32:46.048 Address Family: 1 (IPv4) 00:32:46.048 Subsystem Type: 2 (NVM Subsystem) 00:32:46.048 Entry Flags: 00:32:46.048 Duplicate Returned Information: 0 00:32:46.048 Explicit Persistent Connection Support for Discovery: 0 00:32:46.048 Transport Requirements: 00:32:46.048 Secure Channel: Not Specified 00:32:46.048 Port ID: 1 (0x0001) 00:32:46.048 Controller ID: 65535 (0xffff) 00:32:46.048 Admin Max SQ Size: 32 00:32:46.048 Transport Service Identifier: 4420 00:32:46.048 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:46.048 Transport Address: 10.0.0.1 00:32:46.048 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.307 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.307 get_feature(0x01) failed 00:32:46.307 get_feature(0x02) failed 00:32:46.307 get_feature(0x04) failed 00:32:46.307 ===================================================== 00:32:46.307 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:46.307 ===================================================== 00:32:46.307 Controller Capabilities/Features 00:32:46.307 ================================ 00:32:46.307 Vendor ID: 0000 00:32:46.307 Subsystem Vendor ID: 0000 00:32:46.307 Serial Number: 8dfab29c2dbbb0a3da92 00:32:46.307 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:46.307 Firmware Version: 6.7.0-68 00:32:46.307 Recommended Arb Burst: 6 00:32:46.307 IEEE OUI Identifier: 00 00 00 00:32:46.307 Multi-path I/O 00:32:46.307 May have multiple subsystem ports: Yes 00:32:46.307 May have multiple controllers: Yes 00:32:46.307 Associated with SR-IOV VF: No 00:32:46.307 Max Data Transfer Size: Unlimited 00:32:46.307 Max Number of Namespaces: 1024 00:32:46.307 Max Number of I/O Queues: 128 00:32:46.307 NVMe Specification Version (VS): 1.3 00:32:46.307 NVMe Specification Version (Identify): 1.3 00:32:46.307 Maximum Queue Entries: 1024 00:32:46.307 Contiguous Queues Required: No 00:32:46.307 Arbitration Mechanisms Supported 00:32:46.307 Weighted Round Robin: Not Supported 00:32:46.307 Vendor Specific: Not Supported 
00:32:46.307 Reset Timeout: 7500 ms 00:32:46.307 Doorbell Stride: 4 bytes 00:32:46.307 NVM Subsystem Reset: Not Supported 00:32:46.307 Command Sets Supported 00:32:46.307 NVM Command Set: Supported 00:32:46.307 Boot Partition: Not Supported 00:32:46.307 Memory Page Size Minimum: 4096 bytes 00:32:46.307 Memory Page Size Maximum: 4096 bytes 00:32:46.307 Persistent Memory Region: Not Supported 00:32:46.307 Optional Asynchronous Events Supported 00:32:46.307 Namespace Attribute Notices: Supported 00:32:46.307 Firmware Activation Notices: Not Supported 00:32:46.307 ANA Change Notices: Supported 00:32:46.307 PLE Aggregate Log Change Notices: Not Supported 00:32:46.307 LBA Status Info Alert Notices: Not Supported 00:32:46.307 EGE Aggregate Log Change Notices: Not Supported 00:32:46.307 Normal NVM Subsystem Shutdown event: Not Supported 00:32:46.307 Zone Descriptor Change Notices: Not Supported 00:32:46.307 Discovery Log Change Notices: Not Supported 00:32:46.307 Controller Attributes 00:32:46.307 128-bit Host Identifier: Supported 00:32:46.307 Non-Operational Permissive Mode: Not Supported 00:32:46.307 NVM Sets: Not Supported 00:32:46.307 Read Recovery Levels: Not Supported 00:32:46.307 Endurance Groups: Not Supported 00:32:46.307 Predictable Latency Mode: Not Supported 00:32:46.307 Traffic Based Keep ALive: Supported 00:32:46.307 Namespace Granularity: Not Supported 00:32:46.307 SQ Associations: Not Supported 00:32:46.307 UUID List: Not Supported 00:32:46.307 Multi-Domain Subsystem: Not Supported 00:32:46.307 Fixed Capacity Management: Not Supported 00:32:46.307 Variable Capacity Management: Not Supported 00:32:46.307 Delete Endurance Group: Not Supported 00:32:46.307 Delete NVM Set: Not Supported 00:32:46.307 Extended LBA Formats Supported: Not Supported 00:32:46.307 Flexible Data Placement Supported: Not Supported 00:32:46.307 00:32:46.307 Controller Memory Buffer Support 00:32:46.307 ================================ 00:32:46.307 Supported: No 00:32:46.307 00:32:46.307 Persistent Memory Region Support 00:32:46.307 ================================ 00:32:46.307 Supported: No 00:32:46.307 00:32:46.307 Admin Command Set Attributes 00:32:46.307 ============================ 00:32:46.307 Security Send/Receive: Not Supported 00:32:46.307 Format NVM: Not Supported 00:32:46.307 Firmware Activate/Download: Not Supported 00:32:46.307 Namespace Management: Not Supported 00:32:46.307 Device Self-Test: Not Supported 00:32:46.307 Directives: Not Supported 00:32:46.307 NVMe-MI: Not Supported 00:32:46.307 Virtualization Management: Not Supported 00:32:46.307 Doorbell Buffer Config: Not Supported 00:32:46.307 Get LBA Status Capability: Not Supported 00:32:46.307 Command & Feature Lockdown Capability: Not Supported 00:32:46.307 Abort Command Limit: 4 00:32:46.307 Async Event Request Limit: 4 00:32:46.307 Number of Firmware Slots: N/A 00:32:46.307 Firmware Slot 1 Read-Only: N/A 00:32:46.307 Firmware Activation Without Reset: N/A 00:32:46.307 Multiple Update Detection Support: N/A 00:32:46.307 Firmware Update Granularity: No Information Provided 00:32:46.307 Per-Namespace SMART Log: Yes 00:32:46.307 Asymmetric Namespace Access Log Page: Supported 00:32:46.307 ANA Transition Time : 10 sec 00:32:46.307 00:32:46.307 Asymmetric Namespace Access Capabilities 00:32:46.307 ANA Optimized State : Supported 00:32:46.307 ANA Non-Optimized State : Supported 00:32:46.307 ANA Inaccessible State : Supported 00:32:46.307 ANA Persistent Loss State : Supported 00:32:46.307 ANA Change State : Supported 00:32:46.307 ANAGRPID is not 
changed : No 00:32:46.307 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:46.307 00:32:46.307 ANA Group Identifier Maximum : 128 00:32:46.307 Number of ANA Group Identifiers : 128 00:32:46.307 Max Number of Allowed Namespaces : 1024 00:32:46.307 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:46.307 Command Effects Log Page: Supported 00:32:46.307 Get Log Page Extended Data: Supported 00:32:46.307 Telemetry Log Pages: Not Supported 00:32:46.307 Persistent Event Log Pages: Not Supported 00:32:46.307 Supported Log Pages Log Page: May Support 00:32:46.307 Commands Supported & Effects Log Page: Not Supported 00:32:46.308 Feature Identifiers & Effects Log Page:May Support 00:32:46.308 NVMe-MI Commands & Effects Log Page: May Support 00:32:46.308 Data Area 4 for Telemetry Log: Not Supported 00:32:46.308 Error Log Page Entries Supported: 128 00:32:46.308 Keep Alive: Supported 00:32:46.308 Keep Alive Granularity: 1000 ms 00:32:46.308 00:32:46.308 NVM Command Set Attributes 00:32:46.308 ========================== 00:32:46.308 Submission Queue Entry Size 00:32:46.308 Max: 64 00:32:46.308 Min: 64 00:32:46.308 Completion Queue Entry Size 00:32:46.308 Max: 16 00:32:46.308 Min: 16 00:32:46.308 Number of Namespaces: 1024 00:32:46.308 Compare Command: Not Supported 00:32:46.308 Write Uncorrectable Command: Not Supported 00:32:46.308 Dataset Management Command: Supported 00:32:46.308 Write Zeroes Command: Supported 00:32:46.308 Set Features Save Field: Not Supported 00:32:46.308 Reservations: Not Supported 00:32:46.308 Timestamp: Not Supported 00:32:46.308 Copy: Not Supported 00:32:46.308 Volatile Write Cache: Present 00:32:46.308 Atomic Write Unit (Normal): 1 00:32:46.308 Atomic Write Unit (PFail): 1 00:32:46.308 Atomic Compare & Write Unit: 1 00:32:46.308 Fused Compare & Write: Not Supported 00:32:46.308 Scatter-Gather List 00:32:46.308 SGL Command Set: Supported 00:32:46.308 SGL Keyed: Not Supported 00:32:46.308 SGL Bit Bucket Descriptor: Not Supported 00:32:46.308 SGL Metadata Pointer: Not Supported 00:32:46.308 Oversized SGL: Not Supported 00:32:46.308 SGL Metadata Address: Not Supported 00:32:46.308 SGL Offset: Supported 00:32:46.308 Transport SGL Data Block: Not Supported 00:32:46.308 Replay Protected Memory Block: Not Supported 00:32:46.308 00:32:46.308 Firmware Slot Information 00:32:46.308 ========================= 00:32:46.308 Active slot: 0 00:32:46.308 00:32:46.308 Asymmetric Namespace Access 00:32:46.308 =========================== 00:32:46.308 Change Count : 0 00:32:46.308 Number of ANA Group Descriptors : 1 00:32:46.308 ANA Group Descriptor : 0 00:32:46.308 ANA Group ID : 1 00:32:46.308 Number of NSID Values : 1 00:32:46.308 Change Count : 0 00:32:46.308 ANA State : 1 00:32:46.308 Namespace Identifier : 1 00:32:46.308 00:32:46.308 Commands Supported and Effects 00:32:46.308 ============================== 00:32:46.308 Admin Commands 00:32:46.308 -------------- 00:32:46.308 Get Log Page (02h): Supported 00:32:46.308 Identify (06h): Supported 00:32:46.308 Abort (08h): Supported 00:32:46.308 Set Features (09h): Supported 00:32:46.308 Get Features (0Ah): Supported 00:32:46.308 Asynchronous Event Request (0Ch): Supported 00:32:46.308 Keep Alive (18h): Supported 00:32:46.308 I/O Commands 00:32:46.308 ------------ 00:32:46.308 Flush (00h): Supported 00:32:46.308 Write (01h): Supported LBA-Change 00:32:46.308 Read (02h): Supported 00:32:46.308 Write Zeroes (08h): Supported LBA-Change 00:32:46.308 Dataset Management (09h): Supported 00:32:46.308 00:32:46.308 Error Log 00:32:46.308 ========= 
00:32:46.308 Entry: 0 00:32:46.308 Error Count: 0x3 00:32:46.308 Submission Queue Id: 0x0 00:32:46.308 Command Id: 0x5 00:32:46.308 Phase Bit: 0 00:32:46.308 Status Code: 0x2 00:32:46.308 Status Code Type: 0x0 00:32:46.308 Do Not Retry: 1 00:32:46.308 Error Location: 0x28 00:32:46.308 LBA: 0x0 00:32:46.308 Namespace: 0x0 00:32:46.308 Vendor Log Page: 0x0 00:32:46.308 ----------- 00:32:46.308 Entry: 1 00:32:46.308 Error Count: 0x2 00:32:46.308 Submission Queue Id: 0x0 00:32:46.308 Command Id: 0x5 00:32:46.308 Phase Bit: 0 00:32:46.308 Status Code: 0x2 00:32:46.308 Status Code Type: 0x0 00:32:46.308 Do Not Retry: 1 00:32:46.308 Error Location: 0x28 00:32:46.308 LBA: 0x0 00:32:46.308 Namespace: 0x0 00:32:46.308 Vendor Log Page: 0x0 00:32:46.308 ----------- 00:32:46.308 Entry: 2 00:32:46.308 Error Count: 0x1 00:32:46.308 Submission Queue Id: 0x0 00:32:46.308 Command Id: 0x4 00:32:46.308 Phase Bit: 0 00:32:46.308 Status Code: 0x2 00:32:46.308 Status Code Type: 0x0 00:32:46.308 Do Not Retry: 1 00:32:46.308 Error Location: 0x28 00:32:46.308 LBA: 0x0 00:32:46.308 Namespace: 0x0 00:32:46.308 Vendor Log Page: 0x0 00:32:46.308 00:32:46.308 Number of Queues 00:32:46.308 ================ 00:32:46.308 Number of I/O Submission Queues: 128 00:32:46.308 Number of I/O Completion Queues: 128 00:32:46.308 00:32:46.308 ZNS Specific Controller Data 00:32:46.308 ============================ 00:32:46.308 Zone Append Size Limit: 0 00:32:46.308 00:32:46.308 00:32:46.308 Active Namespaces 00:32:46.308 ================= 00:32:46.308 get_feature(0x05) failed 00:32:46.308 Namespace ID:1 00:32:46.308 Command Set Identifier: NVM (00h) 00:32:46.308 Deallocate: Supported 00:32:46.308 Deallocated/Unwritten Error: Not Supported 00:32:46.308 Deallocated Read Value: Unknown 00:32:46.308 Deallocate in Write Zeroes: Not Supported 00:32:46.308 Deallocated Guard Field: 0xFFFF 00:32:46.308 Flush: Supported 00:32:46.308 Reservation: Not Supported 00:32:46.308 Namespace Sharing Capabilities: Multiple Controllers 00:32:46.308 Size (in LBAs): 1953525168 (931GiB) 00:32:46.308 Capacity (in LBAs): 1953525168 (931GiB) 00:32:46.308 Utilization (in LBAs): 1953525168 (931GiB) 00:32:46.308 UUID: 8f562b24-16d9-4a84-89cf-02dcef116b4c 00:32:46.308 Thin Provisioning: Not Supported 00:32:46.308 Per-NS Atomic Units: Yes 00:32:46.308 Atomic Boundary Size (Normal): 0 00:32:46.308 Atomic Boundary Size (PFail): 0 00:32:46.308 Atomic Boundary Offset: 0 00:32:46.308 NGUID/EUI64 Never Reused: No 00:32:46.308 ANA group ID: 1 00:32:46.308 Namespace Write Protected: No 00:32:46.308 Number of LBA Formats: 1 00:32:46.308 Current LBA Format: LBA Format #00 00:32:46.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:46.308 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:46.308 rmmod nvme_tcp 00:32:46.308 rmmod nvme_fabrics 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:46.308 03:42:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.207 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:48.465 03:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:49.396 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:49.396 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:49.396 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:49.396 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:49.654 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:50.587 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:50.587 00:32:50.587 real 0m9.342s 00:32:50.587 user 0m1.950s 00:32:50.587 sys 0m3.357s 00:32:50.587 03:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:50.587 03:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.587 ************************************ 00:32:50.587 END TEST nvmf_identify_kernel_target 00:32:50.587 ************************************ 00:32:50.587 03:42:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:50.587 03:42:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:50.587 03:42:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.587 03:42:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.587 ************************************ 00:32:50.587 START TEST nvmf_auth_host 00:32:50.587 ************************************ 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:50.587 * Looking for test storage... 00:32:50.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:50.587 03:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:52.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:52.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:52.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:52.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:52.484 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:52.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:32:52.741 00:32:52.741 --- 10.0.0.2 ping statistics --- 00:32:52.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.741 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:32:52.741 00:32:52.741 --- 10.0.0.1 ping statistics --- 00:32:52.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.741 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:52.741 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2549179 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 2549179 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2549179 ']' 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:52.742 03:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=79a6732ef0d32d5b04712ff410a84294 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9kp 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 79a6732ef0d32d5b04712ff410a84294 0 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 79a6732ef0d32d5b04712ff410a84294 0 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=79a6732ef0d32d5b04712ff410a84294 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9kp 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9kp 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9kp 
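The gen_dhchap_key trace above reduces to: read half of $len random bytes with xxd, then let the inline python wrap them in the DHHC-1 secret representation. A minimal standalone sketch, assuming that representation is base64(key || CRC-32(key)) with a two-hex-digit digest id (0=null, 1=sha256, 2=sha384, 3=sha512), as nvme-cli's gen-dhchap-key produces; gen_key_sketch is a hypothetical helper, not the nvmf/common.sh source:

  gen_key_sketch() {  # usage: gen_key_sketch <digest-id> <key-bytes>
      local hex
      hex=$(xxd -p -c0 -l "$2" /dev/urandom)
      # key || CRC-32(key) little-endian, base64-encoded, wrapped in DHHC-1:<id>:...:
      python3 -c 'import base64,binascii,sys,zlib; k=binascii.unhexlify(sys.argv[2]); b=k+zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:"%(int(sys.argv[1]),base64.b64encode(b).decode()))' "$1" "$hex"
  }
  # gen_key_sketch 0 16  ->  DHHC-1:00:...:  (same shape as the /tmp/spdk.key-null.* files above)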
00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1dc014b7182c08c6c1867ad33849f61af239d1779713bdf2a5d9e71e54f82b03 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Zn0 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1dc014b7182c08c6c1867ad33849f61af239d1779713bdf2a5d9e71e54f82b03 3 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1dc014b7182c08c6c1867ad33849f61af239d1779713bdf2a5d9e71e54f82b03 3 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1dc014b7182c08c6c1867ad33849f61af239d1779713bdf2a5d9e71e54f82b03 00:32:52.999 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Zn0 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Zn0 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Zn0 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59d35b57988dc605239d94ccc79a98834d79f7ef32854e30 00:32:53.000 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Z2L 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59d35b57988dc605239d94ccc79a98834d79f7ef32854e30 0 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59d35b57988dc605239d94ccc79a98834d79f7ef32854e30 0 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59d35b57988dc605239d94ccc79a98834d79f7ef32854e30 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Z2L 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Z2L 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Z2L 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2df0b9c17677dd172a23079521b775f1ae22a501f23149e 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zU2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2df0b9c17677dd172a23079521b775f1ae22a501f23149e 2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2df0b9c17677dd172a23079521b775f1ae22a501f23149e 2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2df0b9c17677dd172a23079521b775f1ae22a501f23149e 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zU2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zU2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zU2 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c22c39064a861d4b7e6f0b79c60a9354 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mbo 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c22c39064a861d4b7e6f0b79c60a9354 1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c22c39064a861d4b7e6f0b79c60a9354 1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c22c39064a861d4b7e6f0b79c60a9354 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mbo 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mbo 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mbo 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1e552f016df83dec8805e991bf80d68 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3u7 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1e552f016df83dec8805e991bf80d68 1 00:32:53.291 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1e552f016df83dec8805e991bf80d68 1 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1e552f016df83dec8805e991bf80d68 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3u7 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3u7 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3u7 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.292 03:42:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eba47cdaafb2e10e91a322f9f89df51ad1939d2641b8ad29 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dFh 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eba47cdaafb2e10e91a322f9f89df51ad1939d2641b8ad29 2 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eba47cdaafb2e10e91a322f9f89df51ad1939d2641b8ad29 2 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eba47cdaafb2e10e91a322f9f89df51ad1939d2641b8ad29 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dFh 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dFh 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dFh 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6827f7fb159b10be1fe2e7af707206b3 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3VM 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6827f7fb159b10be1fe2e7af707206b3 0 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6827f7fb159b10be1fe2e7af707206b3 0 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6827f7fb159b10be1fe2e7af707206b3 00:32:53.292 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:53.292 03:42:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3VM 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3VM 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3VM 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:53.549 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87f599534f5fcbbcae59b5e3e2737e7308930621f195b07c6793cf933efc4864 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OJF 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87f599534f5fcbbcae59b5e3e2737e7308930621f195b07c6793cf933efc4864 3 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87f599534f5fcbbcae59b5e3e2737e7308930621f195b07c6793cf933efc4864 3 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87f599534f5fcbbcae59b5e3e2737e7308930621f195b07c6793cf933efc4864 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OJF 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OJF 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.OJF 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2549179 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2549179 ']' 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
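waitforlisten is essentially a poll loop on the RPC socket. A rough sketch, not the autotest_common.sh source; wait_for_rpc_sketch is a hypothetical stand-in, and rpc_get_methods is used here only because it is a cheap RPC every SPDK app answers:

  wait_for_rpc_sketch() {  # usage: wait_for_rpc_sketch <pid> [rpc-socket]
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1                       # app died during startup
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                                         # gave up waiting
  }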
00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:53.550 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9kp 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Zn0 ]] 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zn0 00:32:53.807 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Z2L 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zU2 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zU2 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mbo 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3u7 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3u7 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
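The same registrations can be replayed by hand against the live target; a sketch using two of the key files generated above (paths taken from this trace, socket assumed to be the default /var/tmp/spdk.sock, rpc.py run from the spdk checkout):

  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.mbo
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3u7

Each keyN/ckeyN pair is the host secret plus the controller (bidirectional) secret for one DH-HMAC-CHAP key id.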
00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.dFh 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3VM ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3VM 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OJF 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
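Spelled out, the unlabeled echo records that follow are configure_kernel_target writing the kernel nvmet configfs tree; a sketch of the same sequence, with the attribute names taken from the nvmet configfs layout and the mapping inferred from the echoed values (/dev/nvme0n1 is the backing device the block scan below settles on):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"   # auth.sh restricts this again before linking host0
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"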
00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:53.808 03:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:54.739 Waiting for block devices as requested 00:32:54.739 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:54.995 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:54.995 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:55.251 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:55.251 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:55.251 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:55.251 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:55.507 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:55.507 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:55.507 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:55.507 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:55.764 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:55.764 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:55.764 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:55.764 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:56.022 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:56.022 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:56.588 No valid GPT data, bailing 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:56.588 00:32:56.588 Discovery Log Number of Records 2, Generation counter 2 00:32:56.588 =====Discovery Log Entry 0====== 00:32:56.588 trtype: tcp 00:32:56.588 adrfam: ipv4 00:32:56.588 subtype: current discovery subsystem 00:32:56.588 treq: not specified, sq flow control disable supported 00:32:56.588 portid: 1 00:32:56.588 trsvcid: 4420 00:32:56.588 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:56.588 traddr: 10.0.0.1 00:32:56.588 eflags: none 00:32:56.588 sectype: none 00:32:56.588 =====Discovery Log Entry 1====== 00:32:56.588 trtype: tcp 00:32:56.588 adrfam: ipv4 00:32:56.588 subtype: nvme subsystem 00:32:56.588 treq: not specified, sq flow control disable supported 00:32:56.588 portid: 1 00:32:56.588 trsvcid: 4420 00:32:56.588 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:56.588 traddr: 10.0.0.1 00:32:56.588 eflags: none 00:32:56.588 sectype: none 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 
]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.588 03:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.847 nvme0n1 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.847 03:42:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.847 
03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.847 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 nvme0n1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.105 03:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.105 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.363 nvme0n1 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
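On the kernel side, each nvmet_auth_set_key call in this loop amounts to four configfs writes into the host entry that nvmet_auth_init created; a sketch under the assumption that the dhchap_* attribute names below (from the kernel nvmet auth configfs ABI) are where the auth.sh@48-@51 echo records land, with $key/$ckey standing for the DHHC-1 strings echoed above:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for this pass
  echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group for this pass
  echo "$key"         > "$host/dhchap_key"       # host secret for the current keyid
  echo "$ckey"        > "$host/dhchap_ctrl_key"  # only when a bidirectional ckey exists

The initiator mirrors the same parameters through bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups and the --dhchap-key/--dhchap-ctrlr-key arguments on attach, as the surrounding records show.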
00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.363 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.364 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.621 nvme0n1 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.621 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:32:57.622 03:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.622 nvme0n1 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.622 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.882 03:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.882 nvme0n1 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.882 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.141 nvme0n1 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.141 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.398 nvme0n1 00:32:58.398 
03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.398 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.655 nvme0n1 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.655 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.913 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.914 03:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.914 nvme0n1 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.914 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.172 
03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.172 03:42:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.172 nvme0n1 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.172 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:59.430 03:42:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.430 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.431 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.689 nvme0n1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.689 03:42:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.689 03:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.947 nvme0n1 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.947 03:42:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.947 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.204 nvme0n1 00:33:00.204 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.204 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.204 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.204 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.204 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.461 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.719 nvme0n1 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.719 03:42:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.719 03:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.976 nvme0n1 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:00.976 03:42:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.976 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.234 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.799 nvme0n1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.799 
03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.799 03:42:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.799 03:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.363 nvme0n1 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.363 03:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.927 nvme0n1 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.927 
03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.927 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.493 nvme0n1 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.493 03:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.059 nvme0n1 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.059 03:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.991 nvme0n1 00:33:04.991 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.991 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.991 03:42:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.991 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.992 03:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.924 nvme0n1 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.924 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.925 03:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.860 nvme0n1 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.860 
03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.860 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
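The [[ -z ... ]] probes traced here are the body of the suite's get_main_ns_ip helper: it maps the transport name to the shell variable that holds the target address, then dereferences that variable. A minimal sketch reconstructed from this xtrace output (the TEST_TRANSPORT variable name is an assumption; the two candidate entries are verbatim from the trace):

    # Resolve which address the initiator should dial for the active transport.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1   # assumed transport variable ("tcp" on this run)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
        [[ -z $ip ]] && return 1               # unknown transport
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }

On this run the tcp branch resolves to 10.0.0.1, which is the address every bdev_nvme_attach_controller call in these entries dials.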
00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.119 03:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.053 nvme0n1 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.053 
03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.053 03:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.001 nvme0n1 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.001 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.276 nvme0n1 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.276 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
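The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced just above is what makes bidirectional authentication optional per key: bash's ${var:+word} yields the extra flag pair only when a controller key exists for that keyid, which is why the keyid=4 attaches in this log carry --dhchap-key key4 but no --dhchap-ctrlr-key. A standalone illustration of the idiom (the sample ckeys values are invented for the demo):

    # ${ckeys[keyid]:+...} expands to two words when set and non-empty, else to nothing.
    ckeys=([1]="DHHC-1:02:c2FtcGxlIGNrZXkgZm9yIGRlbW8=" [4]="")   # invented sample values
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra arg(s):

Each iteration then pins the host to a single combination via bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups and attaches with --dhchap-key key$keyid plus whatever ${ckey[@]} expanded to, exactly as the surrounding entries show.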
00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.277 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.534 nvme0n1 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.534 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 nvme0n1 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.792 03:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 nvme0n1 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.792 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.050 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.050 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.050 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.051 nvme0n1 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:10.051 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 nvme0n1 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.565 nvme0n1 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.565 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:10.822 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.823 03:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.823 nvme0n1 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.823 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.080 nvme0n1 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.080 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.339 nvme0n1 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.339 03:42:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.339 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.597 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.856 nvme0n1 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.856 03:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.856 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 nvme0n1 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.114 03:42:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.114 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.679 nvme0n1 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:12.679 03:42:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.679 03:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 nvme0n1 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.949 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.950 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.950 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.950 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.950 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:12.950 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.207 nvme0n1 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.207 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.208 03:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.772 nvme0n1 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:13.772 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.773 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.338 nvme0n1 00:33:14.338 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.338 03:42:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.338 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.338 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.338 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.338 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.596 03:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.160 nvme0n1 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.160 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.161 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.726 nvme0n1 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.726 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
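[editor's note] The nvmet_auth_set_key calls traced above (auth.sh@42-51) provision the kernel nvmet target with the digest, DH group, and DHHC-1 key for the host entry before each connection attempt. The xtrace shows the echo commands but not their redirection targets; the sketch below is a minimal reconstruction assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under an assumed host path, and assuming the keys/ckeys arrays were populated earlier in the script, as the "for keyid in ${!keys[@]}" loop implies.

# Assumed configfs path for the host entry; the trace does not show it.
NVMET_HOST_DIR="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

    # Digest and DH group for the DH-HMAC-CHAP transaction (auth.sh@48-49).
    echo "hmac(${digest})" > "${NVMET_HOST_DIR}/dhchap_hash"
    echo "${dhgroup}"      > "${NVMET_HOST_DIR}/dhchap_dhgroup"

    # Host key (auth.sh@50). The controller key is optional: keyid 4 has
    # none, hence the "[[ -z '' ]]" branch visible in the trace (auth.sh@51).
    echo "${key}" > "${NVMET_HOST_DIR}/dhchap_key"
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${NVMET_HOST_DIR}/dhchap_ctrl_key"
}

# Usage matching the trace above:
nvmet_auth_set_key sha384 ffdhe6144 4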
00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.727 03:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.292 nvme0n1 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
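[editor's note] On the host side, each connect_authenticate <digest> <dhgroup> <keyid> pass repeats the same five traced steps: restrict the initiator's DH-HMAC-CHAP parameters, resolve the target IP, attach with the keyid under test, verify the controller appears, and detach. Below is a minimal sketch of that flow using only the rpc_cmd invocations visible in the trace; key names such as key0/ckey0 are assumed to have been registered with the initiator earlier in the run.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pass --dhchap-ctrlr-key only when a controller key exists for this
    # keyid (auth.sh@58); keyid 4 expands to an empty array.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the initiator to the digest/dhgroup combination under test (auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

    # Connect with in-band authentication (auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The pass succeeds if the controller shows up; then clean up (auth.sh@64-65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The entries that follow resume exactly this sequence for sha384/ffdhe8192 with keyid 0.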
00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.292 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.293 03:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.243 nvme0n1 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.243 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.244 03:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.180 nvme0n1 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.180 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.450 03:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.380 nvme0n1 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.380 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.381 03:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.327 nvme0n1 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.327 03:43:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.327 03:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.258 nvme0n1 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.258 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.259 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.517 nvme0n1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.517 03:43:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.517 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.775 nvme0n1 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.775 03:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:21.775 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.776 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.034 nvme0n1 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.034 03:43:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.034 03:43:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.034 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.293 nvme0n1 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.293 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.551 nvme0n1 00:33:22.551 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.551 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.551 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.551 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.552 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.810 nvme0n1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.810 
03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.810 03:43:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.810 03:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.068 nvme0n1 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.068 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
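The host/auth.sh@48-51 echoes surrounding this point are the body of nvmet_auth_set_key, which provisions the target side of each iteration before the host dials in. A minimal sketch of what that provisioning amounts to, assuming the upstream kernel nvmet configfs layout (/sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*) and reusing the host NQN from the attach calls in this log; the helper's exact internals live in host/auth.sh and may differ:

    # Hypothetical sketch: install the DHHC-1 secrets for one host NQN in
    # the kernel nvmet target via configfs. $key/$ckey stand for the
    # DHHC-1:NN:...: strings visible in the trace; paths assume the
    # standard nvmet configfs layout.
    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$cfs/dhchap_hash"     # digest under test
    echo ffdhe3072      > "$cfs/dhchap_dhgroup"  # DH group under test
    echo "$key"         > "$cfs/dhchap_key"      # host key for this keyid
    [[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"  # bidirectional key, when present

For keyid 4 the trace shows ckey= expanding to the empty string ([[ -z '' ]] at host/auth.sh@51), so only the unidirectional host key is installed on those passes.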
00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.069 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.327 nvme0n1 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.327 03:43:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
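The nvmf/common.sh@741-755 fragment in progress here is get_main_ns_ip, which picks the address the initiator should dial for the active transport. A reconstruction from the trace; the transport variable's name is an assumption, since the trace only shows its expanded value, tcp:

    # Map transport -> name of the env var holding the connect address,
    # then dereference that name; TCP runs resolve to 10.0.0.1 in this job.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }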
00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.327 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.586 nvme0n1 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.586 
03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.586 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.845 nvme0n1 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.845 03:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.103 nvme0n1 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.103 03:43:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.103 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.104 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.104 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.104 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:24.104 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.104 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.361 nvme0n1 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.361 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.620 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
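Zooming out from the iteration in progress: everything repeating through this stretch of the log is one sweep of a two-level loop at host/auth.sh@101-104. This is the sha512 digest pass working through ffdhe2048, ffdhe3072 and now ffdhe4096, key IDs 0 through 4 each. Condensed from the trace markers (dhgroups[] and keys[]/ckeys[] are arrays populated earlier in the test):

    # host/auth.sh@101-104 as seen in the xtrace: for every DH group,
    # provision each key ID on the target, then run the host-side cycle.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (@103)
            connect_authenticate sha512 "$dhgroup" "$keyid"  # host side (@104)
        done
    done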
00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.621 03:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.913 nvme0n1 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.913 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.172 nvme0n1 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.172 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.431 nvme0n1 00:33:25.431 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.431 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.431 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.431 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.431 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
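[Annotation] The repeated `local ip` / `ip_candidates` entries traced throughout this section come from the `get_main_ns_ip` helper in nvmf/common.sh, which maps the test transport to the name of the environment variable holding the initiator address and then resolves that name by indirect expansion (here `tcp` -> `NVMF_INITIATOR_IP` -> `10.0.0.1`). A minimal reconstruction from this trace follows; the transport variable name `$TEST_TRANSPORT` is an assumption, since xtrace only shows its expanded value:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # indirect expansion, traced as: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # traced as: echo 10.0.0.1
    }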
00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.689 03:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.255 nvme0n1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
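[Annotation] Every `connect_authenticate <digest> <dhgroup> <keyid>` round traced in this section has the same shape: configure the host-side DH-HMAC-CHAP digest and DH group, attach the controller with the matching `--dhchap-key` (plus `--dhchap-ctrlr-key` when a controller key exists for that keyid; keyid 4 has none, so `ckey` is empty), verify the controller came up, then detach. A sketch assembled from the commands visible in the trace, where `rpc_cmd` is the harness wrapper around SPDK's JSON-RPC client:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey=()
        # only pass a controller key if one was generated for this keyid (traced at auth.sh@58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # traced at auth.sh@64/@65: assert the controller exists, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare `nvme0n1` tokens interleaved with the trace appear to be the namespace of the freshly attached controller being reported between the attach and detach steps of each round.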
00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.255 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.822 nvme0n1 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.822 03:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.822 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.388 nvme0n1 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.388 03:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.954 nvme0n1 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.954 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.212 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.776 nvme0n1 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.776 03:43:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhNjczMmVmMGQzMmQ1YjA0NzEyZmY0MTBhODQyOTS6vknX: 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: ]] 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWRjMDE0YjcxODJjMDhjNmMxODY3YWQzMzg0OWY2MWFmMjM5ZDE3Nzk3MTNiZGYyYTVkOWU3MWU1NGY4MmIwMy0+Dcw=: 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.776 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.777 03:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.710 nvme0n1 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.710 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.711 03:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.697 nvme0n1 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.697 03:43:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIyYzM5MDY0YTg2MWQ0YjdlNmYwYjc5YzYwYTkzNTRhmmnN: 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFlNTUyZjAxNmRmODNkZWM4ODA1ZTk5MWJmODBkNjiivZvV: 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.697 03:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.630 nvme0n1 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJhNDdjZGFhZmIyZTEwZTkxYTMyMmY5Zjg5ZGY1MWFkMTkzOWQyNjQxYjhhZDI5rzqCdQ==: 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgyN2Y3ZmIxNTliMTBiZTFmZTJlN2FmNzA3MjA2YjMq02lR: 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:31.630 03:43:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.630 03:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.563 nvme0n1 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODdmNTk5NTM0ZjVmY2JiY2FlNTliNWUzZTI3MzdlNzMwODkzMDYyMWYxOTViMDdjNjc5M2NmOTMzZWZjNDg2NMogDDg=: 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:32.563 03:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.496 nvme0n1 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.496 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlkMzViNTc5ODhkYzYwNTIzOWQ5NGNjYzc5YTk4ODM0ZDc5ZjdlZjMyODU0ZTMw3TdAGQ==: 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJkZjBiOWMxNzY3N2RkMTcyYTIzMDc5NTIxYjc3NWYxYWUyMmE1MDFmMjMxNDllXZoZpA==: 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.754 
03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.754 request: 00:33:33.754 { 00:33:33.754 "name": "nvme0", 00:33:33.754 "trtype": "tcp", 00:33:33.754 "traddr": "10.0.0.1", 00:33:33.754 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:33.754 "adrfam": "ipv4", 00:33:33.754 "trsvcid": "4420", 00:33:33.754 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:33.754 "method": "bdev_nvme_attach_controller", 00:33:33.754 "req_id": 1 00:33:33.754 } 00:33:33.754 Got JSON-RPC error response 00:33:33.754 response: 00:33:33.754 { 00:33:33.754 "code": -5, 00:33:33.754 "message": "Input/output error" 00:33:33.754 } 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.754 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:33.755 
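The attach above is expected to fail: no DH-HMAC-CHAP key is supplied while the kernel target demands authentication, so the RPC returns -5 (Input/output error) and the NOT wrapper converts that failure into a pass. A minimal sketch of that invert-the-exit-status pattern (simplified from the real autotest helper, which also tracks the es status code visible in the trace):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed, which is what the test expects
}

NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0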
03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.755 03:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.755 request: 00:33:33.755 { 00:33:33.755 "name": "nvme0", 00:33:33.755 "trtype": "tcp", 00:33:33.755 "traddr": "10.0.0.1", 00:33:33.755 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:33.755 "adrfam": "ipv4", 00:33:33.755 "trsvcid": "4420", 00:33:33.755 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:33.755 "dhchap_key": "key2", 00:33:33.755 "method": "bdev_nvme_attach_controller", 00:33:33.755 "req_id": 1 00:33:33.755 } 00:33:33.755 Got JSON-RPC error response 00:33:33.755 response: 00:33:33.755 { 00:33:33.755 "code": -5, 00:33:33.755 "message": "Input/output error" 00:33:33.755 } 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.755 
03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:33.755 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.013 request: 00:33:34.013 { 00:33:34.013 "name": "nvme0", 00:33:34.013 "trtype": "tcp", 00:33:34.013 "traddr": "10.0.0.1", 00:33:34.013 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:34.013 "adrfam": "ipv4", 00:33:34.013 "trsvcid": "4420", 00:33:34.013 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:34.013 "dhchap_key": "key1", 00:33:34.013 "dhchap_ctrlr_key": "ckey2", 00:33:34.013 "method": "bdev_nvme_attach_controller", 00:33:34.013 "req_id": 1 
00:33:34.013 } 00:33:34.013 Got JSON-RPC error response 00:33:34.013 response: 00:33:34.013 { 00:33:34.013 "code": -5, 00:33:34.013 "message": "Input/output error" 00:33:34.013 } 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:34.013 rmmod nvme_tcp 00:33:34.013 rmmod nvme_fabrics 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2549179 ']' 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2549179 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 2549179 ']' 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 2549179 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2549179 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2549179' 00:33:34.013 killing process with pid 2549179 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 2549179 00:33:34.013 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 2549179 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:34.271 03:43:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:34.271 03:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:36.187 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:36.445 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:36.445 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:36.445 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:36.445 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:36.445 03:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:37.378 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:37.378 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:37.378 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:37.378 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:37.378 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:37.378 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:37.636 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:37.636 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:37.636 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:38.569 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:38.569 03:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9kp /tmp/spdk.key-null.Z2L /tmp/spdk.key-sha256.mbo /tmp/spdk.key-sha384.dFh /tmp/spdk.key-sha512.OJF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:38.569 03:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:39.945 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:39.945 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:39.945 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:39.945 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:39.945 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:39.945 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:39.945 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:39.945 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:39.945 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:39.945 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:39.945 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:39.945 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:39.945 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:39.945 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:39.945 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:39.945 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:39.945 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:39.945 00:33:39.945 real 0m49.319s 00:33:39.945 user 0m47.090s 00:33:39.945 sys 0m5.692s 00:33:39.945 03:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:39.945 03:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.945 ************************************ 00:33:39.945 END TEST nvmf_auth_host 00:33:39.945 ************************************ 00:33:39.945 03:43:25 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:39.945 03:43:25 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:39.945 03:43:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:39.945 03:43:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:39.945 03:43:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.945 ************************************ 00:33:39.945 START TEST nvmf_digest 00:33:39.945 ************************************ 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:39.945 * Looking for test storage... 
00:33:39.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.945 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:39.946 03:43:25 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:39.946 03:43:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:42.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:42.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:42.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.474 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:42.475 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:42.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:33:42.475 00:33:42.475 --- 10.0.0.2 ping statistics --- 00:33:42.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.475 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:42.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:33:42.475 00:33:42.475 --- 10.0.0.1 ping statistics --- 00:33:42.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.475 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.475 ************************************ 00:33:42.475 START TEST nvmf_digest_clean 00:33:42.475 ************************************ 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2558738 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2558738 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2558738 ']' 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.475 
03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.475 [2024-07-21 03:43:27.455107] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:42.475 [2024-07-21 03:43:27.455178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.475 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.475 [2024-07-21 03:43:27.517730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.475 [2024-07-21 03:43:27.600761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.475 [2024-07-21 03:43:27.600815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.475 [2024-07-21 03:43:27.600829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.475 [2024-07-21 03:43:27.600847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.475 [2024-07-21 03:43:27.600857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:42.475 [2024-07-21 03:43:27.600882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.475 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.733 null0 00:33:42.733 [2024-07-21 03:43:27.836470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.734 [2024-07-21 03:43:27.860707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2558758 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2558758 /var/tmp/bperf.sock 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2558758 ']' 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:42.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:42.734 03:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.734 [2024-07-21 03:43:27.909965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:42.734 [2024-07-21 03:43:27.910052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558758 ] 00:33:42.734 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.734 [2024-07-21 03:43:27.968485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.992 [2024-07-21 03:43:28.058900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.992 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.992 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:42.992 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:42.992 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:42.992 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:43.279 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.279 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.538 nvme0n1 00:33:43.538 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:43.538 03:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.796 Running I/O for 2 seconds... 
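Every run in this test follows the same choreography, visible in the trace above: bdevperf starts with --wait-for-rpc, the harness calls framework_start_init over the bperf socket, attaches the controller with --ddgst so each NVMe/TCP data PDU carries a crc32c data digest, and then drives I/O through bdevperf.py perform_tests. Condensed into plain rpc.py calls (a sketch using the socket path and NQN from the log):

B=/var/tmp/bperf.sock
./scripts/rpc.py -s "$B" framework_start_init
./scripts/rpc.py -s "$B" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --ddgst
./examples/bdev/bdevperf/bdevperf.py -s "$B" perform_tests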
00:33:45.711 00:33:45.712 Latency(us) 00:33:45.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.712 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:45.712 nvme0n1 : 2.04 18429.68 71.99 0.00 0.00 6804.65 3665.16 46020.84 00:33:45.712 =================================================================================================================== 00:33:45.712 Total : 18429.68 71.99 0.00 0.00 6804.65 3665.16 46020.84 00:33:45.712 0 00:33:45.712 03:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:45.712 03:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:45.712 03:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:45.712 03:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:45.712 | select(.opcode=="crc32c") 00:33:45.712 | "\(.module_name) \(.executed)"' 00:33:45.712 03:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2558758 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2558758 ']' 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2558758 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2558758 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2558758' 00:33:45.969 killing process with pid 2558758 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2558758 00:33:45.969 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.969 00:33:45.969 Latency(us) 00:33:45.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.969 =================================================================================================================== 00:33:45.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.969 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2558758 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:46.226 03:43:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:46.226 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2559172 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2559172 /var/tmp/bperf.sock 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2559172 ']' 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:46.227 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:46.227 [2024-07-21 03:43:31.526485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:46.227 [2024-07-21 03:43:31.526575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559172 ] 00:33:46.227 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:46.227 Zero copy mechanism will not be used. 
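Before each bperf instance is torn down, the harness confirms (as it did after the first run above) that the crc32c digests were computed by the expected accel module. Roughly what that check sees; the JSON below is fabricated but representative of an accel_get_stats reply:

printf '%s' '{"operations":[{"opcode":"crc32c","module_name":"software","executed":18429}]}' |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# prints: software 18429
# the test then asserts executed > 0 and, with scan_dsa=false, module == software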
00:33:46.483 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.483 [2024-07-21 03:43:31.589409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.483 [2024-07-21 03:43:31.681730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.483 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:46.483 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:46.483 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:46.483 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:46.483 03:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:47.047 03:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.047 03:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.304 nvme0n1 00:33:47.304 03:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:47.304 03:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:47.305 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:47.305 Zero copy mechanism will not be used. 00:33:47.305 Running I/O for 2 seconds... 
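Teardown repeats the pattern already seen for pid 2558758: verify the pid is alive, resolve its process name so a sudo wrapper is never signalled by mistake, send SIGTERM, and wait for the reactor to exit. A stripped-down sketch of that killprocess helper (the real one also branches on uname for non-Linux ps flags):

killprocess() {
    local pid=$1
    kill -0 "$pid"                                      # still running?
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]    # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                 # reap; bperf exits nonzero on SIGTERM
}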
00:33:49.824 00:33:49.824 Latency(us) 00:33:49.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.824 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:49.824 nvme0n1 : 2.00 4514.10 564.26 0.00 0.00 3539.87 910.22 12815.93 00:33:49.824 =================================================================================================================== 00:33:49.824 Total : 4514.10 564.26 0.00 0.00 3539.87 910.22 12815.93 00:33:49.824 0 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:49.824 | select(.opcode=="crc32c") 00:33:49.824 | "\(.module_name) \(.executed)"' 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2559172 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2559172 ']' 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2559172 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2559172 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2559172' 00:33:49.824 killing process with pid 2559172 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2559172 00:33:49.824 Received shutdown signal, test time was about 2.000000 seconds 00:33:49.824 00:33:49.824 Latency(us) 00:33:49.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.824 =================================================================================================================== 00:33:49.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.824 03:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2559172 00:33:49.824 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:49.825 03:43:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2559576 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2559576 /var/tmp/bperf.sock 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2559576 ']' 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:49.825 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:50.082 [2024-07-21 03:43:35.157189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:50.082 [2024-07-21 03:43:35.157283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559576 ] 00:33:50.082 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.082 [2024-07-21 03:43:35.221857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.082 [2024-07-21 03:43:35.313352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.082 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:50.082 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:50.082 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:50.082 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:50.082 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:50.648 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.648 03:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.905 nvme0n1 00:33:50.905 03:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:50.905 03:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.905 Running I/O for 2 seconds... 
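
For reference, the bdevperf command line repeated for each round decodes as below; the glosses are an editor's sketch assuming standard SPDK bdevperf option semantics, not output from this run:

  # -m 2: core mask 0x2, run the reactor on core 1 (matches the reactor notices above)
  # -r:   UNIX-domain RPC socket this bdevperf instance listens on
  # -w/-o/-t/-q: workload, I/O size in bytes, run time in seconds, queue depth
  # -z:   idle until a perform_tests RPC arrives instead of starting I/O at boot
  # --wait-for-rpc: defer subsystem init until framework_start_init is called
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
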
00:33:53.430
00:33:53.430 Latency(us)
00:33:53.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:53.430 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:53.430 nvme0n1 : 2.01 19852.85 77.55 0.00 0.00 6431.67 3228.25 11893.57
00:33:53.430 ===================================================================================================================
00:33:53.430 Total : 19852.85 77.55 0.00 0.00 6431.67 3228.25 11893.57
00:33:53.430 0
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:53.430 | select(.opcode=="crc32c")
00:33:53.430 | "\(.module_name) \(.executed)"'
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2559576
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2559576 ']'
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2559576
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2559576
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2559576' killing process with pid 2559576 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2559576 Received shutdown signal, test time was about 2.000000 seconds
00:33:53.430
00:33:53.430 Latency(us)
00:33:53.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:53.430 ===================================================================================================================
00:33:53.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2559576
00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:33:53.430 03:43:38
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2560087 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2560087 /var/tmp/bperf.sock 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2560087 ']' 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:53.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.430 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:53.430 [2024-07-21 03:43:38.720623] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:53.430 [2024-07-21 03:43:38.720697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560087 ] 00:33:53.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.430 Zero copy mechanism will not be used. 
00:33:53.687 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.687 [2024-07-21 03:43:38.783176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.687 [2024-07-21 03:43:38.874195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.687 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.687 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:53.687 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:53.687 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:53.687 03:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:53.944 03:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.944 03:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.508 nvme0n1 00:33:54.508 03:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:54.508 03:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:54.508 Zero copy mechanism will not be used. 00:33:54.508 Running I/O for 2 seconds... 
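
The digest check that follows each run combines accel_get_stats with a jq filter; written out as a standalone pipeline (socket path as in the log) it is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The test then asserts that the reported module matches the expected one (software here, since scan_dsa=false) and that the executed count is greater than zero.
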
00:33:56.401
00:33:56.401 Latency(us)
00:33:56.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.401 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:56.401 nvme0n1 : 2.00 5978.80 747.35 0.00 0.00 2668.49 2148.12 11456.66
00:33:56.401 ===================================================================================================================
00:33:56.401 Total : 5978.80 747.35 0.00 0.00 2668.49 2148.12 11456.66
00:33:56.401 0
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:56.659 | select(.opcode=="crc32c")
00:33:56.659 | "\(.module_name) \(.executed)"'
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2560087
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2560087 ']'
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2560087
00:33:56.659 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2560087
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2560087' killing process with pid 2560087 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2560087 Received shutdown signal, test time was about 2.000000 seconds
00:33:56.918
00:33:56.918 Latency(us)
00:33:56.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.918 ===================================================================================================================
00:33:56.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:56.918 03:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2560087
00:33:56.918 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2558738
00:33:56.918 03:43:42
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2558738 ']' 00:33:56.918 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2558738 00:33:56.918 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:56.918 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:56.918 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2558738 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2558738' 00:33:57.176 killing process with pid 2558738 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2558738 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2558738 00:33:57.176 00:33:57.176 real 0m15.039s 00:33:57.176 user 0m29.799s 00:33:57.176 sys 0m4.240s 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:57.176 ************************************ 00:33:57.176 END TEST nvmf_digest_clean 00:33:57.176 ************************************ 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:57.176 03:43:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.434 ************************************ 00:33:57.434 START TEST nvmf_digest_error 00:33:57.434 ************************************ 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2560531 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2560531 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2560531 ']' 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.434 [2024-07-21 03:43:42.544398] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:57.434 [2024-07-21 03:43:42.544473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.434 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.434 [2024-07-21 03:43:42.618452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.434 [2024-07-21 03:43:42.706751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.434 [2024-07-21 03:43:42.706813] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.434 [2024-07-21 03:43:42.706829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.434 [2024-07-21 03:43:42.706844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.434 [2024-07-21 03:43:42.706856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
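
The nvmf_digest_error suite starting here provokes digest failures on purpose: as the following lines show, the target is told to route the crc32c opcode to SPDK's error-injection accel module and then to corrupt a batch of operations. Reduced to plain RPC calls (a sketch; rpc_cmd in this trace targets /var/tmp/spdk.sock, and the arguments are exactly those logged):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted CRC32C then surfaces on the initiator as the 'data digest error' records that dominate the rest of this section.
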
00:33:57.434 [2024-07-21 03:43:42.706885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.434 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.693 [2024-07-21 03:43:42.763453] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.693 null0 00:33:57.693 [2024-07-21 03:43:42.873209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.693 [2024-07-21 03:43:42.897418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2560560 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2560560 /var/tmp/bperf.sock 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2560560 ']' 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:57.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:57.693 03:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.693 [2024-07-21 03:43:42.943199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:57.693 [2024-07-21 03:43:42.943271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560560 ] 00:33:57.693 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.951 [2024-07-21 03:43:43.005741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.951 [2024-07-21 03:43:43.097805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.951 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:57.951 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:57.951 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:57.951 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.208 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.773 nvme0n1 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:58.773 03:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:58.773 Running I/O for 2 seconds... 00:33:58.773 [2024-07-21 03:43:44.055989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:58.773 [2024-07-21 03:43:44.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.773 [2024-07-21 03:43:44.056061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.773 [2024-07-21 03:43:44.074636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:58.773 [2024-07-21 03:43:44.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.773 [2024-07-21 03:43:44.074742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.091526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.091563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.091583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.105358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.105412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.105460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.120468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.120514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.120543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.133456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.133512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.151776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.151807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.151837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.167303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.167345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.167377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.180149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.180186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.180206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.194477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.194518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.194540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.207161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.207198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.207217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.224953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.225018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.237025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.237062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.237081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.250817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.250848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:18302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.250877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.267499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.267540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.267570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.279975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.280012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.280031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.298007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.298044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.298063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.312165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.312209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.312232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.031 [2024-07-21 03:43:44.325094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.031 [2024-07-21 03:43:44.325131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.031 [2024-07-21 03:43:44.325152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.289 [2024-07-21 03:43:44.343236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.289 [2024-07-21 03:43:44.343272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.289 [2024-07-21 03:43:44.343292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.289 [2024-07-21 03:43:44.356575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.289 [2024-07-21 03:43:44.356611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.289 [2024-07-21 03:43:44.356641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.289 [2024-07-21 03:43:44.368929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.289 [2024-07-21 03:43:44.368982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.289 [2024-07-21 03:43:44.369015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.289 [2024-07-21 03:43:44.384917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.289 [2024-07-21 03:43:44.384974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.289 [2024-07-21 03:43:44.385002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.289 [2024-07-21 03:43:44.397824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.397865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.397893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.410916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.410963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.410991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.424534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.424569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.424589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.440379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.440416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.440435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.453777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 
[2024-07-21 03:43:44.453825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.453842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.468510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.468546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.468566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.482663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.482708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.482735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.494866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.494912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.509905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.509955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.509975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.523424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.523461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.523481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.537545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.537581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.537600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.549989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.550041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.550061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.566479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.566534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.578527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.578574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.578643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.290 [2024-07-21 03:43:44.596568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.290 [2024-07-21 03:43:44.596604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.290 [2024-07-21 03:43:44.596633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.609680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.609733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.609760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.622256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.622292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.622320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.639140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.639176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.639195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.650362] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.650398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.650418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.667774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.667805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.667821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.681682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.681713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.681730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.694827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.694871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.694888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.711903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.711952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.711972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.728027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.728062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.728082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.741103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.741155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.741185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
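
Each record triple in this stretch is one injected failure making the round trip: nvme_tcp flags the data digest mismatch, nvme_qpair prints the affected READ, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The bperf instance was configured above with

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

so (assuming the usual bdev_nvme semantics, where a retry count of -1 means unlimited) the bdev layer keeps retrying these transient errors instead of failing the job outright.
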
00:33:59.548 [2024-07-21 03:43:44.754644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.754702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.754720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.768007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.768042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.768062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.782347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.782383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.782402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.795347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.795432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.808495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.808531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.808551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.823599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.823643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.823664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.548 [2024-07-21 03:43:44.839734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360) 00:33:59.548 [2024-07-21 03:43:44.839771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.548 [2024-07-21 03:43:44.839791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.548 [2024-07-21 03:43:44.852375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360)
00:33:59.548 [2024-07-21 03:43:44.852411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.548 [2024-07-21 03:43:44.852430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.805 [2024-07-21 03:43:44.870274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b360)
00:33:59.805 [2024-07-21 03:43:44.870310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.805 [2024-07-21 03:43:44.870330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (a crc32c data digest error on tqpair=(0x96b360), then a qid:1 READ of len:1 completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further commands between 03:43:44.888 and 03:43:46.030, part of the 133 transient errors counted below ...]
00:34:00.838 
00:34:00.838 Latency(us)
00:34:00.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.838 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:00.838 nvme0n1 : 2.00 17032.64 66.53 0.00 0.00 7504.72 4004.98 26796.94
00:34:00.838 ===================================================================================================================
00:34:00.838 Total : 17032.64 66.53 0.00 0.00 7504.72 4004.98 26796.94
00:34:00.838 0
00:34:00.838 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:00.838 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:00.838 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:00.838 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:00.838 | .driver_specific
00:34:00.838 | .nvme_error
00:34:00.838 | .status_code
00:34:00.838 | .command_transient_transport_error'
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 ))
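
The (( 133 > 0 )) check above is the pass criterion of this test: injected crc32c corruptions must have surfaced as transient transport errors tallied by the driver. A minimal standalone sketch of the same query, assuming bdevperf is still listening on /var/tmp/bperf.sock and the bdev name nvme0n1 from this run:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdev_get_iostat returns per-bdev JSON; with bdev_nvme_set_options
    # --nvme-error-stat in effect, NVMe completion statuses are tallied
    # under driver_specific.nvme_error (jq path copied from the trace above).
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # Same assertion as host/digest.sh@71: fail unless injected digest errors
    # actually showed up as transient transport errors (133 in this run).
    (( errcount > 0 )) || exit 1
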
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2560560
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2560560 ']'
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2560560
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2560560
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2560560'
killing process with pid 2560560
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2560560
00:34:01.098 Received shutdown signal, test time was about 2.000000 seconds
00:34:01.098 
00:34:01.098 Latency(us)
00:34:01.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:01.098 ===================================================================================================================
00:34:01.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:01.098 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2560560
00:34:01.397 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2561020
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2561020 /var/tmp/bperf.sock
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2561020 ']'
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.397 [2024-07-21 03:43:46.598083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:01.397 [2024-07-21 03:43:46.598169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561020 ]
00:34:01.397 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:01.397 Zero copy mechanism will not be used.
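
waitforlisten above only needs to block until the freshly forked bdevperf answers RPCs on /var/tmp/bperf.sock before any configuration calls are sent. A minimal sketch of that launch-and-wait pattern under the same paths; the rpc_get_methods liveness probe and the 0.5 s poll interval are assumptions, not lifted from this trace:

    #!/usr/bin/env bash
    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Start bdevperf idle (-z) on core mask 0x2: 128 KiB random reads,
    # queue depth 16, 2-second runs, the same flags traced above.
    "$bperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the RPC socket until the app responds, bounded like max_retries=100.
    for (( i = 0; i < 100; i++ )); do
        if "$rpc" -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$bperfpid" 2> /dev/null || { echo "bdevperf exited during startup" >&2; exit 1; }
        sleep 0.5
    done
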
00:34:01.397 EAL: No free 2048 kB hugepages reported on node 1
00:34:01.397 [2024-07-21 03:43:46.664460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:01.655 [2024-07-21 03:43:46.757376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:01.655 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:01.655 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:01.655 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:01.655 03:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:01.913 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:02.478 nvme0n1
00:34:02.478 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:02.479 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:02.479 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:02.479 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:02.479 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:02.479 03:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:02.479 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:02.479 Zero copy mechanism will not be used.
00:34:02.479 Running I/O for 2 seconds...
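
Stripped of the xtrace scaffolding, the setup just performed is a short RPC sequence: enable per-status error accounting with unlimited retries, keep crc32c injection disabled while the controller attaches with TCP data digest (--ddgst) enabled, then corrupt every 32nd crc32c computation before the timed run. A condensed sketch of the same sequence, with flags and addresses copied from the trace above; the trace does not show which socket rpc_cmd used, so the default /var/tmp/spdk.sock is assumed for the injection calls:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    bperf_sock=/var/tmp/bperf.sock

    # Tally NVMe error completions per status code and retry forever, so injected
    # digest failures are counted (and retried) instead of failing the workload.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection off while connecting, so the attach itself is clean.
    # (Assumption: rpc_cmd targets the default application socket.)
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled; the new bdev's name (nvme0n1) is printed.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt every 32nd crc32c computation and start the timed run.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$bperf_py" -s "$bperf_sock" perform_tests
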
00:34:02.479 [2024-07-21 03:43:47.668668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50)
00:34:02.479 [2024-07-21 03:43:47.668721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.479 [2024-07-21 03:43:47.668742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.479 [2024-07-21 03:43:47.675945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50)
00:34:02.479 [2024-07-21 03:43:47.676005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.479 [2024-07-21 03:43:47.676026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern (a crc32c data digest error on tqpair=(0x1c33d50), then a qid:1 READ of len:32 completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for roughly forty further commands between 03:43:47.682 and 03:43:47.917, the sqhd field cycling through 0001/0021/0041/0061 ...]
00:34:02.738 [2024-07-21 03:43:47.924554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50)
[2024-07-21 03:43:47.924589] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.738 [2024-07-21 03:43:47.924607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.738 [2024-07-21 03:43:47.932517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.738 [2024-07-21 03:43:47.932552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.738 [2024-07-21 03:43:47.932572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.940328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.940363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.940396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.948104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.948138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.955850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.955886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.955905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.963621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.963655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.963689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.971338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.971372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.971391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.978407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 
[2024-07-21 03:43:47.978441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.978460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.984389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.984424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.984443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.989900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.989947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.989966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:47.995564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:47.995601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:47.995633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.001091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.001124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.001143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.006623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.006673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.006691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.012233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.012265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.012283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.017831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.017865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.017884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.023278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.023311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.023331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.028865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.028894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.028911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.034410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.034442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.034461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.038555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.038589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.038625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.739 [2024-07-21 03:43:48.043957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.739 [2024-07-21 03:43:48.043992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.739 [2024-07-21 03:43:48.044011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.051350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.051386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.051405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.058310] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.058344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.058370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.066103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.066138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.066157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.073287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.073321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.073340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.080081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.080116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.080135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.087354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.087388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.087408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.094396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.094431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.094450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.102281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.102317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.102336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:34:02.997 [2024-07-21 03:43:48.110791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.110821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.110853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.118821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.118851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.118868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.125904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.125959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.132258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.132292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.132311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.139285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.139319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.139338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.145947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.145981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.997 [2024-07-21 03:43:48.146000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.997 [2024-07-21 03:43:48.152711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.997 [2024-07-21 03:43:48.152742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.152759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.159053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.159088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.159108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.165441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.165475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.165494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.171794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.171826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.171843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.178321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.178356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.178375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.184575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.184610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.184638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.190933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.190968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.190988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.197130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.197164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.197183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.203359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.203393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.203412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.207638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.207687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.207705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.212531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.212566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.212585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.218791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.218822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.218838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.225035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.225070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.225089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.231294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.231328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.231353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.237874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.237921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.237941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.243894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.243945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.243964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.249976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.250009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.250028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.255799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.255843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.255861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.261298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.261332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.261351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.266772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.266802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.266819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.272809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.272840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.272873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.278843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.278874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.278891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.284975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.285010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.285029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.290657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.290702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.290719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.296405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.296438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.296457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.302447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.302480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.302499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.998 [2024-07-21 03:43:48.308000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:02.998 [2024-07-21 03:43:48.308034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.998 [2024-07-21 03:43:48.308053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.314187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.314222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.314242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.320296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.320330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.326700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.326731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.326748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.332934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.332968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.332994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.339188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.339222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.339241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.345091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.345125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.345144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.351105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.351139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.351158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.357259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.357294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.357313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.363132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.363167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.363186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.368648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.368696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.368714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.374235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.374269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.374287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.380333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.380367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.380386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.384572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.384626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.384648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.389367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.389401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.389420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.395863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.395909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.395927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.402136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 
[2024-07-21 03:43:48.402171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.402190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.408105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.408139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.408159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.414236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.414270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.414289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.420233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.420272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.420292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.425871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.425901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.425918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.431597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.257 [2024-07-21 03:43:48.431640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.257 [2024-07-21 03:43:48.431660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.257 [2024-07-21 03:43:48.437796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.437827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.437845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.442085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.442122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.442144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.447203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.447237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.447257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.453800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.453832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.453850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.460770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.460817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.460836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.466837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.466869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.466889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.472788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.472821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.472839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.479072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.479108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.479128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.485634] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.485689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.485712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.492794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.492825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.492842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.499724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.499756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.499773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.506738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.506768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.506785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.513804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.513835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.513852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.520776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.520822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.520839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.527647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.527696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.527713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:34:03.258 [2024-07-21 03:43:48.534591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.534632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.534653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.541405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.541440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.541460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.548468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.548511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.548531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.555476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.555513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.555533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.258 [2024-07-21 03:43:48.562384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.258 [2024-07-21 03:43:48.562419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.258 [2024-07-21 03:43:48.562438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.569292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.569325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.569343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.576362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.576398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.576417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.583444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.583478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.583497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.590719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.590769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.597907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.597948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.597967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.605196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.605231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.605249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.611510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.611545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.611565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.616019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.616052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.616071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.622970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.623037] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.629986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.630021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.630041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.637009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.637043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.637062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.643442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.643478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.643498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.650566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.650600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.650632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.657667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.657699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.657733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.664304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.664338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.664364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.670824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.670866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.670884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.677936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.677984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.678004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.685026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.685061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.685081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.692052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.692087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.692107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.699082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.699117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.699136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.706015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.706049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.706068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.713219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.713254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.713274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.720261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.720294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:03.517 [2024-07-21 03:43:48.720314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.724859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.724897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.724914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.730510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.730545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.730564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.737419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.737453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.737473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.744383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.744418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.517 [2024-07-21 03:43:48.744437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.517 [2024-07-21 03:43:48.751173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.517 [2024-07-21 03:43:48.751208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.751228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.758166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.758201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.758221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.765374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.765410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.765429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.772518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.772558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.772578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.779167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.779202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.779229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.786382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.786418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.786438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.793399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.793434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.793453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.800301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.800335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.800355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.807243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.807278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.807297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.814029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.814064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.814083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.820897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.820947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.820967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.518 [2024-07-21 03:43:48.827995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.518 [2024-07-21 03:43:48.828030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.518 [2024-07-21 03:43:48.828050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.835065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.835100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.835120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.841891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.841950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.841970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.848751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.848782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.848799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.855397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.855460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.862858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.862895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.862913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.869857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.869896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.869929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.877213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.877247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.877266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.884514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.884549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.884569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.891703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.891734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.891751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.898826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.898858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.898875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.903396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.903431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.903450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.909194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.909228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.909248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.915971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.916007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.916027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.922962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.922996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.923015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.930248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.930283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.930302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.937120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.937154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.937173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.943985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.944020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.944040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.950773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.950805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.950823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.957441] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.957475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.957501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.963961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.963995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.964015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.970729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.970760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.970777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.977560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.977592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.977610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.984550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.984583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.984602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.991403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.991436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.991455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:48.998322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:48.998356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:48.998376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:03.776 [2024-07-21 03:43:49.005281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.005314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.012393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.012428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.012447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.019259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.019301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.019322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.025946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.025980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.025998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.032855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.032886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.032903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.039750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.039782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.039799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.046702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.046746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.046763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.053428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.776 [2024-07-21 03:43:49.053461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.776 [2024-07-21 03:43:49.053480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.776 [2024-07-21 03:43:49.060407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.777 [2024-07-21 03:43:49.060441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.777 [2024-07-21 03:43:49.060459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.777 [2024-07-21 03:43:49.067283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.777 [2024-07-21 03:43:49.067316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.777 [2024-07-21 03:43:49.067336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.777 [2024-07-21 03:43:49.074208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.777 [2024-07-21 03:43:49.074241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.777 [2024-07-21 03:43:49.074260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.777 [2024-07-21 03:43:49.081014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.777 [2024-07-21 03:43:49.081048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.777 [2024-07-21 03:43:49.081067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.777 [2024-07-21 03:43:49.087523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:03.777 [2024-07-21 03:43:49.087557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.777 [2024-07-21 03:43:49.087577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.094270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.094304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.094323] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.099834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.099882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.103746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.103774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.110177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.110211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.110230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.117058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.117092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.124030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.124065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.124085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.130863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.130891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.130914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.137626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.137674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.137692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.144478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.144514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.144533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.151414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.151448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.151467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.158237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.158270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.158288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.164842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.164871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.171655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.171701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.171718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.178405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.178437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.178456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.184778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.184808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.035 [2024-07-21 03:43:49.184825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.191085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.191118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.191137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.198027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.198060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.198079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.204827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.204857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.204874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.211698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.211729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.211746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.218363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.218396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.218415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.225269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.225303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.225323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.232312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.232347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.232366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.239239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.239273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.239292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.246109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.246142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.246168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.253059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.253091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.253110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.260031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.260065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.260084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.267119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.267152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.267170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.274176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.274210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.274228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.281221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.281255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.281274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.288311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.288344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.288363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.295209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.295244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.295263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.301984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.035 [2024-07-21 03:43:49.302018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.035 [2024-07-21 03:43:49.302037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.035 [2024-07-21 03:43:49.308763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.036 [2024-07-21 03:43:49.308799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.036 [2024-07-21 03:43:49.308817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.036 [2024-07-21 03:43:49.315610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.036 [2024-07-21 03:43:49.315650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.036 [2024-07-21 03:43:49.315684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.036 [2024-07-21 03:43:49.322279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.036 [2024-07-21 03:43:49.322312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.036 [2024-07-21 03:43:49.322331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.036 [2024-07-21 03:43:49.329213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50) 00:34:04.036 [2024-07-21 03:43:49.329246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.036 [2024-07-21 03:43:49.329265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.036 [2024-07-21 03:43:49.336209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50)
00:34:04.036 [2024-07-21 03:43:49.336241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.036 [2024-07-21 03:43:49.336260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair=(0x1c33d50), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining reads of this pass, timestamps 03:43:49.343 through 03:43:49.649, with varying cid/lba/sqhd ...]
00:34:04.552 [2024-07-21 03:43:49.655977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c33d50)
00:34:04.552 [2024-07-21 03:43:49.656011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.552 [2024-07-21 03:43:49.656030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.552
00:34:04.552                                           Latency(us)
00:34:04.552 Device Information                      : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:04.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:04.552    nvme0n1                              :       2.00    4700.99     587.62       0.00       0.00    3399.03     758.52   11747.93
00:34:04.552 ===================================================================================================================
00:34:04.552 Total                                   :               4700.99     587.62       0.00       0.00    3399.03     758.52   11747.93
00:34:04.552 0
00:34:04.552 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:04.552 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:04.552 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:04.552 | .driver_specific
00:34:04.552 | .nvme_error
00:34:04.552 | .status_code
00:34:04.552 | .command_transient_transport_error'
00:34:04.552 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 303 > 0 ))
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2561020
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2561020 ']'
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2561020
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2561020
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2561020'
00:34:04.810 killing process with pid 2561020
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2561020
00:34:04.810 Received shutdown signal, test time was about 2.000000 seconds
00:34:04.810
00:34:04.810                                           Latency(us)
00:34:04.810 Device Information                      : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:04.810 ===================================================================================================================
00:34:04.810 Total                                   :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:34:04.810 03:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2561020
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2561490
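[Editor's note] The `(( 303 > 0 ))` check above is the pass criterion for the randread pass: digest.sh reads back how many commands completed with a transient transport error from bdevperf's RPC socket. A minimal standalone sketch of that query, assembled from the rpc.py invocation and jq filter visible in the trace (the function body below is an illustrative reconstruction, not a verbatim copy of digest.sh):

    # Query bdevperf's per-bdev I/O statistics over its RPC socket and pull the
    # transient-transport-error counter maintained by --nvme-error-stat.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    # The pass succeeds only if at least one such completion was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))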
00:34:05.068 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2561490 /var/tmp/bperf.sock
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2561490 ']'
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:05.068 [2024-07-21 03:43:50.221587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:05.068 [2024-07-21 03:43:50.221695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561490 ]
00:34:05.068 EAL: No free 2048 kB hugepages reported on node 1
00:34:05.068 [2024-07-21 03:43:50.283596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.068 [2024-07-21 03:43:50.370799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:05.326 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:05.583 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:05.583 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:05.583 03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
03:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:05.841 nvme0n1
00:34:05.841 03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
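[Editor's note] Condensed, the setup traced above is three RPCs issued before the randwrite pass starts below; a sketch with every flag copied from the xtrace (the comments are interpretation, and `rpc_cmd` is the autotest helper for the target-side app, whose socket is not shown in this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Keep NVMe error statistics and retry failed I/O indefinitely, so digest
    # failures are counted rather than failing the bdev.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach over NVMe/TCP with data digest enabled (--ddgst), so a payload
    # CRC32C is generated and verified on the data path.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Have the accel layer corrupt every 256th crc32c it computes (after first
    # clearing any earlier injection with '-t disable'), guaranteeing a steady
    # stream of digest errors during the run.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256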
00:34:05.841 03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:05.841 03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:05.841 03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:05.841 03:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:06.099 Running I/O for 2 seconds...
00:34:06.099 [2024-07-21 03:43:51.232284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ee5c8
00:34:06.099 [2024-07-21 03:43:51.233204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.099 [2024-07-21 03:43:51.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:06.099 [2024-07-21 03:43:51.245481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f81e0
00:34:06.099 [2024-07-21 03:43:51.246324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.099 [2024-07-21 03:43:51.246354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... the same three-record pattern (Data digest error on tqpair=(0x993bc0) with varying pdu, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats through the randwrite pass, timestamps 03:43:51.259 through 03:43:52.167, with varying cid/lba/pdu ...]
00:34:06.877 [2024-07-21 03:43:52.177903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f4f40
00:34:06.877 [2024-07-21 03:43:52.179299] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.877 [2024-07-21 03:43:52.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.191190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ecc78 00:34:07.135 [2024-07-21 03:43:52.192708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.192749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.202882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fef90 00:34:07.135 [2024-07-21 03:43:52.204045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.204076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.215857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e8088 00:34:07.135 [2024-07-21 03:43:52.216782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.216810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.230406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e6300 00:34:07.135 [2024-07-21 03:43:52.232325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.232358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.242245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f96f8 00:34:07.135 [2024-07-21 03:43:52.243581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.243658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.253708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f4298 00:34:07.135 [2024-07-21 03:43:52.255690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.255719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.265477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f7970 00:34:07.135 [2024-07-21 
03:43:52.266438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.266464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.278581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190eff18 00:34:07.135 [2024-07-21 03:43:52.279716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.279744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.290749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f1430 00:34:07.135 [2024-07-21 03:43:52.291851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.291879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.304013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e12d8 00:34:07.135 [2024-07-21 03:43:52.305237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.305268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.318148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e5ec8 00:34:07.135 [2024-07-21 03:43:52.319631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.319658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.331267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f31b8 00:34:07.135 [2024-07-21 03:43:52.332858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.332901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.343290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f96f8 00:34:07.135 [2024-07-21 03:43:52.344882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.344928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.356589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ea248 00:34:07.135 
[2024-07-21 03:43:52.358360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.358392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.369875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f2510 00:34:07.135 [2024-07-21 03:43:52.371810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.371853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.383071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f35f0 00:34:07.135 [2024-07-21 03:43:52.385153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.385184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.392012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f9f68 00:34:07.135 [2024-07-21 03:43:52.392912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.392958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.406312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ec840 00:34:07.135 [2024-07-21 03:43:52.407894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.407938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.419595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190eb760 00:34:07.135 [2024-07-21 03:43:52.421331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.421362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.431365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fd208 00:34:07.135 [2024-07-21 03:43:52.432656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.135 [2024-07-21 03:43:52.432682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:07.135 [2024-07-21 03:43:52.444146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with 
pdu=0x2000190ec408 00:34:07.135 [2024-07-21 03:43:52.445189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.136 [2024-07-21 03:43:52.445217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:07.393 [2024-07-21 03:43:52.455870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f3e60 00:34:07.393 [2024-07-21 03:43:52.457881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-21 03:43:52.457926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-21 03:43:52.466737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fe2e8 00:34:07.394 [2024-07-21 03:43:52.467654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.467680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.483364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e6fa8 00:34:07.394 [2024-07-21 03:43:52.485298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.485330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.496653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e23b8 00:34:07.394 [2024-07-21 03:43:52.498794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.498839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.505734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fdeb0 00:34:07.394 [2024-07-21 03:43:52.506651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.506678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.520099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e27f0 00:34:07.394 [2024-07-21 03:43:52.521672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.521699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.533225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x993bc0) with pdu=0x2000190edd58 00:34:07.394 [2024-07-21 03:43:52.534940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.534985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.546472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e5658 00:34:07.394 [2024-07-21 03:43:52.548386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.548412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.559777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f8a50 00:34:07.394 [2024-07-21 03:43:52.561859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.561900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.568803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f7da8 00:34:07.394 [2024-07-21 03:43:52.569725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.569765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.581792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fc998 00:34:07.394 [2024-07-21 03:43:52.582832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.582861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.594621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fe2e8 00:34:07.394 [2024-07-21 03:43:52.595520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.595546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.607506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190eb328 00:34:07.394 [2024-07-21 03:43:52.608242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.608269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.621253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x993bc0) with pdu=0x2000190f1868 00:34:07.394 [2024-07-21 03:43:52.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.633035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f2d80 00:34:07.394 [2024-07-21 03:43:52.634037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.634063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.645756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ea680 00:34:07.394 [2024-07-21 03:43:52.646606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.646660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.658963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190df550 00:34:07.394 [2024-07-21 03:43:52.660100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.660142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.670840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f2510 00:34:07.394 [2024-07-21 03:43:52.672819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.672847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.682504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ebb98 00:34:07.394 [2024-07-21 03:43:52.683404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.683435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:07.394 [2024-07-21 03:43:52.695435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f4b08 00:34:07.394 [2024-07-21 03:43:52.696481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.394 [2024-07-21 03:43:52.696512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.707235] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ef270 00:34:07.652 [2024-07-21 03:43:52.708222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.708253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.721191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e12d8 00:34:07.652 [2024-07-21 03:43:52.722421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.722447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.735393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fda78 00:34:07.652 [2024-07-21 03:43:52.737257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.737288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.748565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f9b30 00:34:07.652 [2024-07-21 03:43:52.750611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.750665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.757549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e4578 00:34:07.652 [2024-07-21 03:43:52.758394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.758425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.771855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e3060 00:34:07.652 [2024-07-21 03:43:52.773374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.773406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:07.652 [2024-07-21 03:43:52.785068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f7970 00:34:07.652 [2024-07-21 03:43:52.786794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.652 [2024-07-21 03:43:52.786837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.796846] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f5378 00:34:07.653 [2024-07-21 03:43:52.798043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.798069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.809695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190eea00 00:34:07.653 [2024-07-21 03:43:52.810735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.822294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fa3a0 00:34:07.653 [2024-07-21 03:43:52.823644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.823689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.836716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e01f8 00:34:07.653 [2024-07-21 03:43:52.838730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.838773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.845761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190df118 00:34:07.653 [2024-07-21 03:43:52.846587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.846624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.858611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f7100 00:34:07.653 [2024-07-21 03:43:52.859461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.859487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.871651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ea248 00:34:07.653 [2024-07-21 03:43:52.872647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.872679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 
03:43:52.883699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e1b48 00:34:07.653 [2024-07-21 03:43:52.884675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.884701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.896836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e1710 00:34:07.653 [2024-07-21 03:43:52.897971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.898016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.910567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f6cc8 00:34:07.653 [2024-07-21 03:43:52.911565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.911596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.922498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e3498 00:34:07.653 [2024-07-21 03:43:52.924353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.924384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.936437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fe2e8 00:34:07.653 [2024-07-21 03:43:52.937915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.937958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.948318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e0630 00:34:07.653 [2024-07-21 03:43:52.949812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:07.653 [2024-07-21 03:43:52.961282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f6458 00:34:07.653 [2024-07-21 03:43:52.962780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-21 03:43:52.962808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:34:07.910 [2024-07-21 03:43:52.973590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e4de8 00:34:07.910 [2024-07-21 03:43:52.974589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:52.974640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:52.986215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f8a50 00:34:07.910 [2024-07-21 03:43:52.987502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:52.987534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:52.998997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e5658 00:34:07.910 [2024-07-21 03:43:53.000317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.000343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.013194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fe2e8 00:34:07.910 [2024-07-21 03:43:53.015149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.015180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.026349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f5378 00:34:07.910 [2024-07-21 03:43:53.028511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.028538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.035304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e8d30 00:34:07.910 [2024-07-21 03:43:53.036282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.036308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.047320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190fcdd0 00:34:07.910 [2024-07-21 03:43:53.048263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.048294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.060560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f2948 00:34:07.910 [2024-07-21 03:43:53.061676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.061703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.074362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190ee190 00:34:07.910 [2024-07-21 03:43:53.075316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.075343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.087568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f0350 00:34:07.910 [2024-07-21 03:43:53.088720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.088762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.099526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f1ca0 00:34:07.910 [2024-07-21 03:43:53.101520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.113493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e49b0 00:34:07.910 [2024-07-21 03:43:53.115088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.115120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.125435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190de038 00:34:07.910 [2024-07-21 03:43:53.127045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.127077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.138706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f5378 00:34:07.910 [2024-07-21 03:43:53.140478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.140506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.151853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e8d30 00:34:07.910 [2024-07-21 03:43:53.153802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.153845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.161995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f6458 00:34:07.910 [2024-07-21 03:43:53.163241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.163267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.175230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190f3a28 00:34:07.910 [2024-07-21 03:43:53.176637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.176680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.188437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190df550 00:34:07.910 [2024-07-21 03:43:53.190018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.190049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.200231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e6b70 00:34:07.910 [2024-07-21 03:43:53.201301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.201328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:07.910 [2024-07-21 03:43:53.212959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993bc0) with pdu=0x2000190e5658 00:34:07.910 [2024-07-21 03:43:53.213887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.910 [2024-07-21 03:43:53.213915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:08.168
00:34:08.168 Latency(us)
00:34:08.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:08.168 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:08.168 nvme0n1 : 2.00 20205.13 78.93 0.00 0.00 6327.98 2961.26 15534.46
00:34:08.168 ===================================================================================================================
00:34:08.168 Total : 20205.13 78.93 0.00 0.00 6327.98 2961.26 15534.46
00:34:08.168 0
00:34:08.168 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:08.168 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:08.168 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:08.168 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:08.168 | .driver_specific
00:34:08.168 | .nvme_error
00:34:08.168 | .status_code
00:34:08.168 | .command_transient_transport_error'
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2561490
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2561490 ']'
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2561490
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2561490
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2561490'
00:34:08.426 killing process with pid 2561490
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2561490
00:34:08.426 Received shutdown signal, test time was about 2.000000 seconds
00:34:08.426
00:34:08.426 Latency(us)
00:34:08.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:08.426 ===================================================================================================================
00:34:08.426 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:08.426 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2561490
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2561899
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2561899 /var/tmp/bperf.sock
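The get_transient_errcount step traced above is the pass/fail check for this case: with bdev_nvme_set_options --nvme-error-stat in effect, the bdev layer keeps a per-status-code count of NVMe errors, and the test simply asserts that the transient transport error counter is non-zero after the run (158 here). As a minimal sketch, assuming the same rpc.py path and bperf socket used in this run, the check reduces to:

  # Sketch of the traced get_transient_errcount step (host/digest.sh@27-28), not the
  # verbatim helper: read bdevperf's iostat over its RPC socket and extract the
  # counter accumulated for COMMAND TRANSIENT TRANSPORT ERROR completions.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))  # 158 > 0 in the trace above, so the randwrite/4096/qd128 case passes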
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2561899 ']'
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:08.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:08.684 03:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:08.685 [2024-07-21 03:43:53.785803] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:08.685 [2024-07-21 03:43:53.785884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561899 ]
00:34:08.685 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:08.685 Zero copy mechanism will not be used.
00:34:08.685 EAL: No free 2048 kB hugepages reported on node 1
00:34:08.685 [2024-07-21 03:43:53.845817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:08.685 [2024-07-21 03:43:53.931708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:08.942 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:08.942 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:08.942 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:08.942 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:09.200 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:09.460 nvme0n1
00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
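Before the second run (randwrite, 131072-byte I/O, queue depth 16), the trace above rebuilds the whole error path. A condensed sketch of that RPC sequence follows, assuming as in this run that bperf_rpc targets bdevperf's socket at /var/tmp/bperf.sock while rpc_cmd goes to the nvmf target's default RPC socket, and reading -i 32 as the injection interval:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Initiator (bdevperf): count NVMe errors per status code, retry failed I/O forever.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target: leave crc32c untouched so the attach itself succeeds.
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # Attach with data digest enabled; every TCP data PDU now carries a CRC32C digest.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target: corrupt every 32nd crc32c operation, so digest checks fail and the WRITEs
  # complete with COMMAND TRANSIENT TRANSPORT ERROR, as the log below shows.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32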
00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:09.460 03:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:09.460 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:09.460 Zero copy mechanism will not be used. 00:34:09.460 Running I/O for 2 seconds... 00:34:09.460 [2024-07-21 03:43:54.758554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.460 [2024-07-21 03:43:54.758940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.460 [2024-07-21 03:43:54.758983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.460 [2024-07-21 03:43:54.764548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.460 [2024-07-21 03:43:54.764888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.460 [2024-07-21 03:43:54.764927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.460 [2024-07-21 03:43:54.770317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.460 [2024-07-21 03:43:54.770672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.460 [2024-07-21 03:43:54.770703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.775871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.776166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.776197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.782491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.782825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.782854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.789194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.789520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.789553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.796535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.796859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.796888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.803820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.804180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.804213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.810064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.810390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.810423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.815626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.815934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.815981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.821042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.821365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.821404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.827842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.828213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.828245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.833927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.834264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.834296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.840239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.840590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.840630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.847244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.847609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.847662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.853491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.853859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.853889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.858958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.859295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.859325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.864649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.864987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.865022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.870147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.870476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.870508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.875717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.876118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.876150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.881399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.881767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.881796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.886824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.887176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.887204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.893145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.893483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.893515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.899405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.899740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.899769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.904809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.905169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.910271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.910605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.910649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.915960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 
[2024-07-21 03:43:54.916281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.916311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.922601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.922937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.922966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.929252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.929589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.929630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.935398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.721 [2024-07-21 03:43:54.935736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.721 [2024-07-21 03:43:54.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.721 [2024-07-21 03:43:54.940871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.941201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.941233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.946256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.946592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.951821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.952170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.957407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) 
with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.957749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.957778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.962818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.963150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.963181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.969075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.969415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.969446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.975325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.975706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.980934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.981277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.981305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.986387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.986730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.986760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.992140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.992474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.992502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:54.998667] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:54.998981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:54.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:55.004228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:55.004566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:55.004597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:55.009691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:55.010021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:55.010052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:55.015471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:55.015814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:55.015844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:55.021031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:55.021358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:55.021390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.722 [2024-07-21 03:43:55.026772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.722 [2024-07-21 03:43:55.027151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.722 [2024-07-21 03:43:55.027184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.033219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.033553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.033582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.038834] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.039174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.039204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.044374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.044720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.044750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.049781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.050147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.056185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.056542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.062801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.063134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.063171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.069363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.069709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.069737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.075894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.076223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.076264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:34:09.999 [2024-07-21 03:43:55.082473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.082801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.082830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.088162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.088522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.088554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.093693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.094143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.094174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.099357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.099708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.099736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.104740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.105059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.105091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.110504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.110829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.110857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.117103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.117441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.117469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.123553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.123888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.123916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.129400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.129746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.129774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.134748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.135089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.135120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.140237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.140559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.140591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.145701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.146021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.146052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.151416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:09.999 [2024-07-21 03:43:55.151754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.999 [2024-07-21 03:43:55.151783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.999 [2024-07-21 03:43:55.158080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.158420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.158452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.163987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.164319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.164346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.170288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.170612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.170650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.175785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.176110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.176142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.181252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.181583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.181612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.188020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.188343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.188374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.194082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.194415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.194442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.201054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.201270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.201301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.208837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.209195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.209227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.215947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.216267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.216299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.223594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.223950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.223983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.231060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.231400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.231432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.238942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.239277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.239319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.246500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.246911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.254193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.254501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 
[2024-07-21 03:43:55.254533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.260959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.261343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.261375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.268041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.268370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.268403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.273741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.274076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.274108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.279320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.279651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.279698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.284749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.285071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.285103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.290750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.291069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.291101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.297007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.297348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.297380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.302812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.303141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.303173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.000 [2024-07-21 03:43:55.308471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.000 [2024-07-21 03:43:55.308832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.000 [2024-07-21 03:43:55.308862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.259 [2024-07-21 03:43:55.314326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.314704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.314742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.321056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.321426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.321456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.326564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.326883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.326932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.331960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.332294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.332323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.337815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.338160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.338189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.345533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.345863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.351522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.351855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.351888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.357067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.357391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.362416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.362754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.362782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.368193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.368519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.368547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.374903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.375271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.375303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.381138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.381473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.381502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.386971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.387312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.387341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.392938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.393269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.393298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.398824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.399167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.399205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.406184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.406507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.406539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.413280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.413599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.413643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.420809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.421156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.421184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.428706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 
[2024-07-21 03:43:55.429073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.429106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.436055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.436396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.436428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.442788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.443117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.443152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.449423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.449761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.449790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.456510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.456862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.456892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.464809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.465214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.465246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.471075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.471397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.471429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.476837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with 
pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.477185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.477217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.482883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.482971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.483001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.490235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.490559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.260 [2024-07-21 03:43:55.490591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.260 [2024-07-21 03:43:55.496403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.260 [2024-07-21 03:43:55.496490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.496519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.502187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.502552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.502581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.507683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.508008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.508041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.513137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.513457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.513490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.518688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.518999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.519028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.524201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.524521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.524553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.529575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.529915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.529944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.536049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.536372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.542468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.542847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.542875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.549287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.549632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.549660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.556442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.556778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.556806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.563719] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.564031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.261 [2024-07-21 03:43:55.564059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.261 [2024-07-21 03:43:55.570741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.261 [2024-07-21 03:43:55.571110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.520 [2024-07-21 03:43:55.571162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.520 [2024-07-21 03:43:55.577857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.520 [2024-07-21 03:43:55.578180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.578210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.584979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.585334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.585364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.591302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.591626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.591656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.596445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.596780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.596809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.601853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.602148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.602180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
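[annotation] The block above repeats one pattern per I/O: data_crc32_calc_done() in tcp.c computes the CRC32C data digest (DDGST) over the payload of a received data PDU, finds it does not match the digest carried in the PDU, and the WRITE is failed back to the initiator. NVMe/TCP defines both the header digest (HDGST) and the data digest (DDGST) as CRC32C. As a minimal, self-contained sketch of that digest (not SPDK's code — SPDK wraps this as its spdk_crc32c_update() helper and real implementations typically use the SSE4.2 CRC32 instruction or a table-driven routine), the bitwise form is:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Reflected CRC-32C (Castagnoli), the digest NVMe/TCP uses for
     * HDGST/DDGST: reversed polynomial 0x82F63B78, init 0xFFFFFFFF,
     * final XOR 0xFFFFFFFF. Illustrative sketch, not SPDK source. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Known-answer test: CRC32C("123456789") == 0xE3069283 */
        const uint8_t kat[] = "123456789";
        printf("crc32c = 0x%08X\n", crc32c(kat, 9));
        return 0;
    }

In the transport the computed value is simply compared against the DDGST received in the PDU; a mismatch produces exactly the *ERROR* lines seen here, once per write in this digest-error test loop.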
00:34:10.521 [2024-07-21 03:43:55.607048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.607353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.607382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.612205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.612502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.612541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.617473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.617806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.617836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.623491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.623838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.623869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.629942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.630281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.630309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.635472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.635799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.635828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.640990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.641335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.641363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.646894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.647244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.647277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.653512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.653846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.653875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.659145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.659485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.659518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.664645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.664939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.664968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.671007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.671293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.671326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.678256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.678552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.678582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.685969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.686326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.686358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.692518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.692851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.692884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.698345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.698691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.698721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.704074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.704404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.704433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.710761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.711098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.711131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.716308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.716635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.716681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.721865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.722207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.722240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.727425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.727745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.727779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.734185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.734513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.734541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.740969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.741303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.741331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.748721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.521 [2024-07-21 03:43:55.749086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.521 [2024-07-21 03:43:55.749119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.521 [2024-07-21 03:43:55.756370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.756739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.756767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.763469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.763811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.763839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.769970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.770328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.770365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.776128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.776452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 
[2024-07-21 03:43:55.776484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.782021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.782377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.782409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.787826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.788171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.788204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.793244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.793559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.793588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.799715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.800109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.800141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.806341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.806695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.806724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.812960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.813285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.813314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.819321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.819662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.819691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.824803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.825127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.825159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.522 [2024-07-21 03:43:55.830249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.522 [2024-07-21 03:43:55.830541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.522 [2024-07-21 03:43:55.830586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.835670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.835961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.835996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.841024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.841350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.841379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.847522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.847847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.853477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.853807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.853836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.859089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.859419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.859448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.864456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.864818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.864847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.869943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.870264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.870296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.875334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.875660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.875706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.880730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.881062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.881094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.886831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.887188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.781 [2024-07-21 03:43:55.887221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.781 [2024-07-21 03:43:55.893011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.781 [2024-07-21 03:43:55.893349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.893379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.898452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.898783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.898812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.903874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.904210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.904242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.909884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.910235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.910263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.916341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.916712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.916740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.923154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.923511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.923544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.928928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.929251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.929283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.934325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.934665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.934694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.939688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.940014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.940046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.945143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.945471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.945503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.950895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.951250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.951282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.957563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.957888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.957932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.963389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.963728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.963756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.969520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.969856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.969885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.976697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.977006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.977036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.984464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 
03:43:55.984800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.984830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.990347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.990693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.990727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:55.996250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:55.996587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:55.996621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.002146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.002479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.002509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.008421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.008759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.008787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.015887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.016283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.016315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.022155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.022475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.027796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with 
pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.028125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.028157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.034005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.034326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.034359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.040296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.040674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.040703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.046476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.046805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.046854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.053988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.054361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.054393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.061024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.061147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.061179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.069030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.069369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.782 [2024-07-21 03:43:56.069402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.782 [2024-07-21 03:43:56.074710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.782 [2024-07-21 03:43:56.075059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.783 [2024-07-21 03:43:56.075095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.783 [2024-07-21 03:43:56.080070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.783 [2024-07-21 03:43:56.080419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.783 [2024-07-21 03:43:56.080447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.783 [2024-07-21 03:43:56.085889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.783 [2024-07-21 03:43:56.086264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.783 [2024-07-21 03:43:56.086292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.783 [2024-07-21 03:43:56.091323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:10.783 [2024-07-21 03:43:56.091682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.783 [2024-07-21 03:43:56.091713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.096685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.097020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.097050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.102088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.102392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.102429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.107988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.108309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.108341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.114501] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.114838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.114866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.121046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.121387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.121419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.126521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.126842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.126869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.131840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.132269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.132301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.137293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.137636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.137665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.142780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.143119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.143147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.148232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.148565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.148600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
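[annotation] Every completion in this run carries the same status, printed as "(00/22)": status code type 00h (generic command status) and status code 22h (Command Transient Transport Error), with dnr:0, so the host is told the WRITE failed for a transport-level reason (the bad digest) and retries are permitted. A small sketch of how those fields sit in the 16-bit completion Status Field per the NVMe base specification — this is the layout spdk_nvme_print_completion() is decoding; the struct and names here are illustrative, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion Status Field bit layout:
     * bit 0 = phase tag (p), bits 8:1 = status code (sc),
     * bits 11:9 = status code type (sct), bits 13:12 = command retry
     * delay (crd, reserved on older controllers), bit 14 = more (m),
     * bit 15 = do not retry (dnr). */
    struct status {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct status decode(uint16_t raw)
    {
        struct status s = {
            .p   =  raw        & 0x1u,
            .sc  = (raw >> 1)  & 0xFFu,
            .sct = (raw >> 9)  & 0x7u,
            .crd = (raw >> 12) & 0x3u,
            .m   = (raw >> 14) & 0x1u,
            .dnr = (raw >> 15) & 0x1u,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 00h / SC 22h with DNR clear, matching "(00/22) ... dnr:0"
         * in the completions above. */
        uint16_t raw = (uint16_t)((0x0 << 9) | (0x22 << 1));
        struct status s = decode(raw);
        printf("sct:%u sc:0x%02x dnr:%u -> host may retry\n",
               s.sct, s.sc, s.dnr);
        return 0;
    }

Because dnr is clear and the error is classed as transient, the initiator is free to reissue the command, which is consistent with the test continuing to drive writes at new LBAs rather than tearing the queue pair down.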
00:34:11.042 [2024-07-21 03:43:56.153814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.154152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.154179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.159296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.159633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.159661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.165287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.165611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.165653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.171930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.172270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.172302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.178431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.178780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.178810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.183813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.184142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.184174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.042 [2024-07-21 03:43:56.189277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90 00:34:11.042 [2024-07-21 03:43:56.189609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.042 [2024-07-21 03:43:56.189645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.042 [2024-07-21 03:43:56.194862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90
00:34:11.043 [2024-07-21 03:43:56.195206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.043 [2024-07-21 03:43:56.195234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.043 [... the data_crc32_calc_done / print_command / print_completion triplet above repeats for every in-flight WRITE from 03:43:56.200351 through 03:43:56.736186, with only the timestamp, lba and sqhd changing: each injected data digest error on tqpair=(0x993e90) completes one I/O with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:34:11.563 [2024-07-21 03:43:56.742352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90
00:34:11.563 [2024-07-21 03:43:56.742644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.563 [2024-07-21 03:43:56.742674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.563 [2024-07-21 03:43:56.747696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90
00:34:11.563 [2024-07-21 03:43:56.747968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.563 [2024-07-21 03:43:56.747995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.563 [2024-07-21 03:43:56.752795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x993e90) with pdu=0x2000190fef90
00:34:11.563 [2024-07-21 03:43:56.753094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.563 [2024-07-21 03:43:56.753122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.563
00:34:11.563 Latency(us)
00:34:11.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:11.563 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:11.563 nvme0n1 : 2.00 5062.70 632.84 0.00 0.00 3152.87 2402.99 8155.59
00:34:11.563 ===================================================================================================================
00:34:11.563 Total : 5062.70 632.84 0.00 0.00 3152.87 2402.99 8155.59
00:34:11.563 0
00:34:11.563 03:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:11.563 03:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:11.563 03:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:11.563 | .driver_specific
00:34:11.563 | .nvme_error
00:34:11.563 | .status_code
00:34:11.563 | .command_transient_transport_error'
00:34:11.563 03:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:11.821 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 326 > 0 ))
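The (( 326 > 0 )) check above is the pass criterion of the digest-error test: it asserts that bdevperf observed a non-zero count of transient transport errors. As the trace shows, the count is pulled from the running bdevperf instance over its RPC socket. A minimal standalone sketch of that query, with the socket path, script path, and jq filter taken from the trace above:

    # Ask the bdevperf instance listening on /var/tmp/bperf.sock for per-bdev
    # iostat, then extract the TRANSIENT TRANSPORT ERROR completion counter.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    # The test then simply asserts the counter is non-zero, e.g.:
    # (( $(get_transient_errcount nvme0n1) > 0 ))   # 326 errors in this run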
00:34:11.821 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2561899
00:34:11.821 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2561899 ']'
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2561899
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2561899
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2561899'
00:34:11.822 killing process with pid 2561899
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2561899
00:34:11.822 Received shutdown signal, test time was about 2.000000 seconds
00:34:11.822
00:34:11.822 Latency(us)
00:34:11.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:11.822 ===================================================================================================================
00:34:11.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:11.822 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2561899
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2560531
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2560531 ']'
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2560531
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2560531
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2560531'
00:34:12.079 killing process with pid 2560531
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2560531
00:34:12.079 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2560531
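Both processes above (bperf, pid 2561899, and the nvmf target, pid 2560531) go down through the same killprocess helper from autotest_common.sh. Reconstructed from the xtrace, it behaves roughly like the sketch below; the real helper carries more error handling than shown here:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                            # @946: refuse an empty pid
        kill -0 "$pid" || return 0                           # @950: already gone, nothing to do
        if [ "$(uname)" = Linux ]; then                      # @951
            process_name=$(ps --no-headers -o comm= "$pid")  # @952: reactor_1 for bperf, reactor_0 for the target
        fi
        [ "$process_name" = sudo ] && return 1               # @956: never signal a sudo wrapper (simplified)
        echo "killing process with pid $pid"                 # @964
        kill "$pid"                                          # @965: SIGTERM, producing the shutdown-signal message above
        wait "$pid"                                          # @970: reap the child and collect its exit status
    }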
00:34:12.337
00:34:12.337 real 0m15.052s
00:34:12.337 user 0m29.852s
00:34:12.337 sys 0m4.171s
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:12.337 ************************************
00:34:12.337 END TEST nvmf_digest_error
00:34:12.337 ************************************
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:12.337 rmmod nvme_tcp
00:34:12.337 rmmod nvme_fabrics
00:34:12.337 rmmod nvme_keyring
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2560531 ']'
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2560531
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 2560531 ']'
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 2560531
00:34:12.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2560531) - No such process
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 2560531 is not found'
00:34:12.337 Process with pid 2560531 is not found
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:12.337 03:43:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:14.865 03:43:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:14.865
00:34:14.865 real 0m34.510s
00:34:14.865 user 1m0.508s
00:34:14.865 sys 0m9.956s
00:34:14.865 03:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:14.865 03:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:34:14.865 ************************************
00:34:14.865 END TEST nvmf_digest
00:34:14.865 ************************************
00:34:14.865 03:43:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:34:14.865 03:43:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:34:14.865 03:43:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:34:14.865 03:43:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:34:14.865 03:43:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:14.865 03:43:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:14.865 03:43:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:14.865 ************************************
00:34:14.865 START TEST nvmf_bdevperf
00:34:14.865 ************************************
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
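The banners and the real/user/sys lines above all come from the run_test wrapper: nvmf.sh hands it a test name plus the script to execute, and run_test times the body and brackets it with START/END markers. In outline it works like this sketch, which is inferred from the banners and timing output in the log rather than copied from the implementation:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. .../host/bdevperf.sh --transport=tcp; emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }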
00:34:14.865 * Looking for test storage...
00:34:14.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
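The host identity set at common.sh lines 17-18 above comes from nvme gen-hostnqn; the host ID is simply the UUID embedded in the generated NQN, and both are handed to every initiator command. Roughly (the parameter expansion here is illustrative, not the script's literal text):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep everything after the last ':' -> the bare uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # passed to initiator commands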
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain entries repeated several times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same entries with the go toolchain prepended once more ...]:/var/lib/snapd/snap/bin
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same entries with protoc prepended once more ...]:/var/lib/snapd/snap/bin
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH from above ...]:/var/lib/snapd/snap/bin
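Each nested source of export.sh prepends the go/golangci/protoc toolchain directories to PATH again, which is why the values above keep growing; the final echo only prints the result. Duplicate PATH entries are harmless for lookup, but if the repetition mattered, a dedup pass like this illustrative helper (not part of export.sh) would collapse it:

    dedup_path() {
        local IFS=: dir out=
        for dir in $PATH; do                                      # split PATH on ':'
            [[ ":$out:" == *":$dir:"* ]] || out=${out:+$out:}$dir # keep the first occurrence only
        done
        PATH=$out
    }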
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.865 03:43:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:16.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:16.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:16.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:16.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.808 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:34:16.809 00:34:16.809 --- 10.0.0.2 ping statistics --- 00:34:16.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.809 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:34:16.809 00:34:16.809 --- 10.0.0.1 ping statistics --- 00:34:16.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.809 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2564245 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2564245 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2564245 ']' 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:16.809 03:44:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.809 [2024-07-21 03:44:02.025255] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:16.809 [2024-07-21 03:44:02.025347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.809 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.809 [2024-07-21 03:44:02.104846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:17.068 [2024-07-21 03:44:02.204459] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:17.068 [2024-07-21 03:44:02.204526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.068 [2024-07-21 03:44:02.204543] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.068 [2024-07-21 03:44:02.204557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.068 [2024-07-21 03:44:02.204574] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.068 [2024-07-21 03:44:02.204637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.068 [2024-07-21 03:44:02.204700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.068 [2024-07-21 03:44:02.204704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.068 [2024-07-21 03:44:02.351195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.068 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.326 Malloc0 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.326 [2024-07-21 03:44:02.408131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.326 { 00:34:17.326 "params": { 00:34:17.326 "name": "Nvme$subsystem", 00:34:17.326 "trtype": "$TEST_TRANSPORT", 00:34:17.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.326 "adrfam": "ipv4", 00:34:17.326 "trsvcid": "$NVMF_PORT", 00:34:17.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.326 "hdgst": ${hdgst:-false}, 00:34:17.326 "ddgst": ${ddgst:-false} 00:34:17.326 }, 00:34:17.326 "method": "bdev_nvme_attach_controller" 00:34:17.326 } 00:34:17.326 EOF 00:34:17.326 )") 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:17.326 03:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:17.326 "params": { 00:34:17.326 "name": "Nvme1", 00:34:17.326 "trtype": "tcp", 00:34:17.326 "traddr": "10.0.0.2", 00:34:17.326 "adrfam": "ipv4", 00:34:17.326 "trsvcid": "4420", 00:34:17.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.326 "hdgst": false, 00:34:17.326 "ddgst": false 00:34:17.326 }, 00:34:17.326 "method": "bdev_nvme_attach_controller" 00:34:17.327 }' 00:34:17.327 [2024-07-21 03:44:02.456126] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:17.327 [2024-07-21 03:44:02.456216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564305 ] 00:34:17.327 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.327 [2024-07-21 03:44:02.520380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.327 [2024-07-21 03:44:02.607562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.585 Running I/O for 1 seconds... 
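A note on what the trace above amounts to, since it is spread across several thousand xtrace lines: the two ice ports are split so the target side (cvl_0_0, 10.0.0.2) lives in the cvl_0_0_ns_spdk network namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the host, nvmf_tgt is started inside the namespace, the data path is built over JSON-RPC, and bdevperf receives its bdev configuration as an anonymous file descriptor (--json /dev/fd/62 is bash process substitution around gen_nvmf_target_json). Condensed into plain shell with the values copied from the trace; $rootdir and the inline JSON literal are illustrative stand-ins for the real test scripts, and nvmf_tgt is assumed to be already running inside the namespace, as launched at nvmf/common.sh@480 above:

    rootdir=/path/to/spdk          # the log uses the Jenkins workspace checkout
    rpc="$rootdir/scripts/rpc.py"  # talks to the default /var/tmp/spdk.sock

    # Dataplane split (condensed from nvmf/common.sh@248-264 above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target plumbing (host/bdevperf.sh@17-21 above).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: the JSON mirrors the gen_nvmf_target_json output printed
    # above; <(...) is the /dev/fd/62 seen in the bdevperf invocation.
    cfg='{"subsystems":[{"subsystem":"bdev","config":[{
          "method":"bdev_nvme_attach_controller",
          "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
                    "adrfam":"ipv4","trsvcid":"4420",
                    "subnqn":"nqn.2016-06.io.spdk:cnode1",
                    "hostnqn":"nqn.2016-06.io.spdk:host1",
                    "hdgst":false,"ddgst":false}}]}]}'
    "$rootdir/build/examples/bdevperf" --json <(echo "$cfg") -q 128 -o 4096 -w verify -t 1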
00:34:18.972 00:34:18.972 Latency(us) 00:34:18.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.972 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:18.972 Verification LBA range: start 0x0 length 0x4000 00:34:18.972 Nvme1n1 : 1.01 8759.81 34.22 0.00 0.00 14525.43 2912.71 15728.64 00:34:18.972 =================================================================================================================== 00:34:18.972 Total : 8759.81 34.22 0.00 0.00 14525.43 2912.71 15728.64 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2564532 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:18.972 { 00:34:18.972 "params": { 00:34:18.972 "name": "Nvme$subsystem", 00:34:18.972 "trtype": "$TEST_TRANSPORT", 00:34:18.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.972 "adrfam": "ipv4", 00:34:18.972 "trsvcid": "$NVMF_PORT", 00:34:18.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.972 "hdgst": ${hdgst:-false}, 00:34:18.972 "ddgst": ${ddgst:-false} 00:34:18.972 }, 00:34:18.972 "method": "bdev_nvme_attach_controller" 00:34:18.972 } 00:34:18.972 EOF 00:34:18.972 )") 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:18.972 03:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:18.972 "params": { 00:34:18.972 "name": "Nvme1", 00:34:18.972 "trtype": "tcp", 00:34:18.972 "traddr": "10.0.0.2", 00:34:18.972 "adrfam": "ipv4", 00:34:18.972 "trsvcid": "4420", 00:34:18.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.972 "hdgst": false, 00:34:18.972 "ddgst": false 00:34:18.972 }, 00:34:18.972 "method": "bdev_nvme_attach_controller" 00:34:18.972 }' 00:34:18.972 [2024-07-21 03:44:04.148482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:18.972 [2024-07-21 03:44:04.148555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564532 ] 00:34:18.972 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.972 [2024-07-21 03:44:04.208983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.230 [2024-07-21 03:44:04.297385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.230 Running I/O for 15 seconds... 
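The 1-second smoke run above completes cleanly: 8759.81 IOPS (34.22 MiB/s) at queue depth 128, zero failed I/O, worst-case latency under 16 ms. With that baseline established, the harness launches the actual failover exercise: the same bdevperf job, but for 15 seconds and with -f added, and host/bdevperf.sh@33 then SIGKILLs the target while that job has I/O in flight. The choreography, paraphrased with illustrative variable names (the pids and sleeps are taken from the surrounding trace; reading -f as "keep the job running across I/O failures" is inferred from how this test uses it, not from bdevperf documentation):

    # Long verify run in the background, reusing $cfg from the sketch above.
    "$rootdir/build/examples/bdevperf" --json <(echo "$cfg") \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!            # 2564532 in this run

    sleep 3                   # host/bdevperf.sh@32: let I/O get in flight
    kill -9 "$nvmfpid"        # host/bdevperf.sh@33: hard-kill nvmf_tgt (2564245)
    sleep 3                   # host/bdevperf.sh@35: initiator now faces a dead target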
00:34:22.511 03:44:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2564245 00:34:22.511 03:44:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:22.511 [2024-07-21 03:44:07.116159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.511 [2024-07-21 03:44:07.116694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.511 [2024-07-21 03:44:07.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.116952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.116983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 
[2024-07-21 03:44:07.117636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:22.512 [2024-07-21 03:44:07.117786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:22.512 [2024-07-21 03:44:07.117815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.117983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.117999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.118031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.118063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.118096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.118127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.512 [2024-07-21 03:44:07.118159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.512 [2024-07-21 03:44:07.118176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50200 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.118973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.118989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 
03:44:07.119006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.513 [2024-07-21 03:44:07.119316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:22.513 [2024-07-21 03:44:07.119331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.513 [2024-07-21 03:44:07.119578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.513 [2024-07-21 03:44:07.119593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.119972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.119987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:22.514 [2024-07-21 03:44:07.120020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.514 [2024-07-21 03:44:07.120558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc69a0 is same with the state(5) to be set
00:34:22.514 [2024-07-21 03:44:07.120592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:22.514 [2024-07-21 03:44:07.120620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:22.514 [2024-07-21 03:44:07.120635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50528 len:8 PRP1 0x0 PRP2 0x0
00:34:22.514 [2024-07-21 03:44:07.120650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:22.514 [2024-07-21 03:44:07.120732] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cc69a0 was disconnected and freed. reset controller.
00:34:22.514 [2024-07-21 03:44:07.124488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.514 [2024-07-21 03:44:07.124566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.514 [2024-07-21 03:44:07.125307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-21 03:44:07.125337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.514 [2024-07-21 03:44:07.125363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.514 [2024-07-21 03:44:07.125626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.514 [2024-07-21 03:44:07.125865] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.514 [2024-07-21 03:44:07.125887] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.514 [2024-07-21 03:44:07.125927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.514 [2024-07-21 03:44:07.129566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.514 [2024-07-21 03:44:07.138746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.514 [2024-07-21 03:44:07.139167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-21 03:44:07.139199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.514 [2024-07-21 03:44:07.139217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.514 [2024-07-21 03:44:07.139455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.514 [2024-07-21 03:44:07.139722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.514 [2024-07-21 03:44:07.139744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.514 [2024-07-21 03:44:07.139758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.514 [2024-07-21 03:44:07.143308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.514 [2024-07-21 03:44:07.152601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.514 [2024-07-21 03:44:07.152994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-21 03:44:07.153035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.514 [2024-07-21 03:44:07.153053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.514 [2024-07-21 03:44:07.153291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.514 [2024-07-21 03:44:07.153534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.514 [2024-07-21 03:44:07.153558] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.514 [2024-07-21 03:44:07.153574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.514 [2024-07-21 03:44:07.157160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.515 [2024-07-21 03:44:07.166468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.166882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.166914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.166932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.167172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.167414] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.167438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.167455] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.171037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.515 [2024-07-21 03:44:07.180336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.180759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.180790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.180809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.181048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.181291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.181314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.181330] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.184908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.515 [2024-07-21 03:44:07.194213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.194627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.194660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.194678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.194918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.195166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.195190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.195206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.198787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.515 [2024-07-21 03:44:07.208066] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.208444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.208483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.208501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.208752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.208996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.209020] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.209036] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.212607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.515 [2024-07-21 03:44:07.222096] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.222491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.222527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.222546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.222801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.223045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.223068] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.223085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.226663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.515 [2024-07-21 03:44:07.235956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.236363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.236394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.236412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.236661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.236904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.236928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.236944] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.240521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.515 [2024-07-21 03:44:07.249807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.250203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.250236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.250254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.250499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.250752] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.250777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.250793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.254364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.515 [2024-07-21 03:44:07.263656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.264023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.264053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.264071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.264309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.264551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.264575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.515 [2024-07-21 03:44:07.264591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.515 [2024-07-21 03:44:07.268175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.515 [2024-07-21 03:44:07.277684] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.515 [2024-07-21 03:44:07.278097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.515 [2024-07-21 03:44:07.278128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.515 [2024-07-21 03:44:07.278146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.515 [2024-07-21 03:44:07.278384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.515 [2024-07-21 03:44:07.278638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.515 [2024-07-21 03:44:07.278663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.278679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.282249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.291531] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.291963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.291995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.292026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.292266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.292508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.292532] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.292548] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.296143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.516 [2024-07-21 03:44:07.305424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.305798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.305840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.305858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.306099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.306342] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.306366] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.306381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.309966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.319446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.319803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.319834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.319853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.320091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.320335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.320358] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.320374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.323953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.516 [2024-07-21 03:44:07.333455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.333837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.333878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.333895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.334133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.334381] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.334406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.334422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.338002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.347486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.347904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.347935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.347957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.348195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.348437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.348461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.348477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.352058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.516 [2024-07-21 03:44:07.361338] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.361755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.361786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.361810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.362048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.362291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.362314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.362331] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.365910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.375188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.375577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.375608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.375636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.375875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.376119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.376153] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.376168] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.379772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.516 [2024-07-21 03:44:07.389057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.389466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.389497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.389515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.389763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.390007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.390031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.390047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.393624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.402920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.403325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.403356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.403374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.403624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.403867] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.403903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.403919] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.407490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.516 [2024-07-21 03:44:07.416781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.417272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.417325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.417343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.417582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.417835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.516 [2024-07-21 03:44:07.417860] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.516 [2024-07-21 03:44:07.417875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.516 [2024-07-21 03:44:07.421448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.516 [2024-07-21 03:44:07.430746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.516 [2024-07-21 03:44:07.431223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.516 [2024-07-21 03:44:07.431276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.516 [2024-07-21 03:44:07.431300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.516 [2024-07-21 03:44:07.431539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.516 [2024-07-21 03:44:07.431794] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.431818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.431834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.435406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.444695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.445130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.445182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.445200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.445438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.445692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.445717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.445733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.449302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.517 [2024-07-21 03:44:07.458590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.458985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.459025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.459043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.459287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.459529] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.459553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.459570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.463153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.472434] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.472843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.472873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.472891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.473129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.473372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.473402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.473419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.477003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.517 [2024-07-21 03:44:07.486294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.486697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.486730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.486748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.486988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.487231] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.487255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.487270] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.490870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.500158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.500575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.500607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.500642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.500883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.501126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.501150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.501166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.504747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.517 [2024-07-21 03:44:07.514028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.514405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.514440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.514459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.514709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.514953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.514977] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.514993] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.518562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.528066] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.528482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.528513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.528542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.528792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.529036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.529060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.529076] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.532655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.517 [2024-07-21 03:44:07.541933] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.542351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.542382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.542407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.542657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.542901] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.542925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.542941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.546511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.555798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.556193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.556227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.556245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.556489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.556742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.556766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.556782] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.560451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.517 [2024-07-21 03:44:07.569738] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.570131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.570165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.570183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.570426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.570682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.517 [2024-07-21 03:44:07.570708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.517 [2024-07-21 03:44:07.570723] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.517 [2024-07-21 03:44:07.574295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.517 [2024-07-21 03:44:07.583590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.517 [2024-07-21 03:44:07.583995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.517 [2024-07-21 03:44:07.584036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:22.517 [2024-07-21 03:44:07.584054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:22.517 [2024-07-21 03:44:07.584298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:22.517 [2024-07-21 03:44:07.584541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.518 [2024-07-21 03:44:07.584565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.518 [2024-07-21 03:44:07.584581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.518 [2024-07-21 03:44:07.588163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 48 further identical reset attempts elided: the same "resetting controller" -> connect() errno 111 -> "Resetting controller failed." cycle against tqpair=0x1a961e0 at 10.0.0.2:4420 repeats every ~13-14 ms from 03:44:07.597 through 03:44:08.255 ...]
00:34:23.040 [2024-07-21 03:44:08.264370] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.040 [2024-07-21 03:44:08.264794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.040 [2024-07-21 03:44:08.264826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.040 [2024-07-21 03:44:08.264845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.040 [2024-07-21 03:44:08.265083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.040 [2024-07-21 03:44:08.265326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.040 [2024-07-21 03:44:08.265351] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.040 [2024-07-21 03:44:08.265367] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.040 [2024-07-21 03:44:08.268953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.040 [2024-07-21 03:44:08.278238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.040 [2024-07-21 03:44:08.278640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.040 [2024-07-21 03:44:08.278677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.040 [2024-07-21 03:44:08.278703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.040 [2024-07-21 03:44:08.278943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.040 [2024-07-21 03:44:08.279187] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.279211] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.279228] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.041 [2024-07-21 03:44:08.282810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.041 [2024-07-21 03:44:08.292094] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.041 [2024-07-21 03:44:08.292486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.041 [2024-07-21 03:44:08.292517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.041 [2024-07-21 03:44:08.292535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.041 [2024-07-21 03:44:08.292786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.041 [2024-07-21 03:44:08.293030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.293053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.293070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.041 [2024-07-21 03:44:08.296660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.041 [2024-07-21 03:44:08.305940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.041 [2024-07-21 03:44:08.306371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.041 [2024-07-21 03:44:08.306420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.041 [2024-07-21 03:44:08.306439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.041 [2024-07-21 03:44:08.306688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.041 [2024-07-21 03:44:08.306930] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.306953] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.306969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.041 [2024-07-21 03:44:08.310542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.041 [2024-07-21 03:44:08.319837] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.041 [2024-07-21 03:44:08.320239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.041 [2024-07-21 03:44:08.320271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.041 [2024-07-21 03:44:08.320295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.041 [2024-07-21 03:44:08.320536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.041 [2024-07-21 03:44:08.320792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.320818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.320834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.041 [2024-07-21 03:44:08.324427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.041 [2024-07-21 03:44:08.333776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.041 [2024-07-21 03:44:08.334241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.041 [2024-07-21 03:44:08.334273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.041 [2024-07-21 03:44:08.334292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.041 [2024-07-21 03:44:08.334532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.041 [2024-07-21 03:44:08.334788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.334814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.334830] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.041 [2024-07-21 03:44:08.338407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.041 [2024-07-21 03:44:08.347718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.041 [2024-07-21 03:44:08.348113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.041 [2024-07-21 03:44:08.348146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.041 [2024-07-21 03:44:08.348164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.041 [2024-07-21 03:44:08.348404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.041 [2024-07-21 03:44:08.348659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.041 [2024-07-21 03:44:08.348684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.041 [2024-07-21 03:44:08.348700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.352285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.300 [2024-07-21 03:44:08.361583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.361962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.361994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.362012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.362251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.362494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.362524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.362541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.366128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.300 [2024-07-21 03:44:08.375631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.376023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.376054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.376072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.376310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.376553] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.376578] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.376594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.380201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.300 [2024-07-21 03:44:08.389501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.389919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.389951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.389970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.390208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.390450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.390476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.390492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.394077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.300 [2024-07-21 03:44:08.403375] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.403781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.403813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.403831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.404070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.404312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.404337] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.404354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.407944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.300 [2024-07-21 03:44:08.417238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.417606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.417646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.417665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.417904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.418146] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.418171] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.418186] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.421770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.300 [2024-07-21 03:44:08.431277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.431683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.431716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.431734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.431974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.432216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.432240] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.432256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.435840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.300 [2024-07-21 03:44:08.445126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.445547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.445566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.445819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.446064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.446089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.446105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.449692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.300 [2024-07-21 03:44:08.458984] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.459373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.459404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.300 [2024-07-21 03:44:08.459422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.300 [2024-07-21 03:44:08.459679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.300 [2024-07-21 03:44:08.459923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.300 [2024-07-21 03:44:08.459947] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.300 [2024-07-21 03:44:08.459964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.300 [2024-07-21 03:44:08.463541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.300 [2024-07-21 03:44:08.472835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.300 [2024-07-21 03:44:08.473228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.300 [2024-07-21 03:44:08.473259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.473277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.473516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.473772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.473797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.473814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.477391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.301 [2024-07-21 03:44:08.486702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.487095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.487127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.487145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.487384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.487639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.487665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.487681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.491252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.301 [2024-07-21 03:44:08.500560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.501018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.501069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.501088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.501327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.501569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.501594] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.501631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.505224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.301 [2024-07-21 03:44:08.514520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.514943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.514975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.514993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.515231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.515473] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.515498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.515514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.519098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.301 [2024-07-21 03:44:08.528383] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.528851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.528884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.528902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.529153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.529399] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.529425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.529442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.533026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.301 [2024-07-21 03:44:08.542313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.542722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.542755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.542775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.543014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.543258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.543283] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.543299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.546883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.301 [2024-07-21 03:44:08.556174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.556655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.556705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.556723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.556962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.557204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.557229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.557244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.560838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.301 [2024-07-21 03:44:08.570126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.570530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.570562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.570580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.570832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.571075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.571100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.571116] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.574717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.301 [2024-07-21 03:44:08.584154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.584534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.584568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.584586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.584839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.585082] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.585107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.585124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.588705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.301 [2024-07-21 03:44:08.598215] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.301 [2024-07-21 03:44:08.598627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.301 [2024-07-21 03:44:08.598660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.301 [2024-07-21 03:44:08.598679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.301 [2024-07-21 03:44:08.598923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.301 [2024-07-21 03:44:08.599166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.301 [2024-07-21 03:44:08.599190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.301 [2024-07-21 03:44:08.599206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.301 [2024-07-21 03:44:08.602792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.560 [2024-07-21 03:44:08.612083] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.560 [2024-07-21 03:44:08.612480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.560 [2024-07-21 03:44:08.612531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.560 [2024-07-21 03:44:08.612549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.560 [2024-07-21 03:44:08.612799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.560 [2024-07-21 03:44:08.613042] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.560 [2024-07-21 03:44:08.613066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.560 [2024-07-21 03:44:08.613083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.560 [2024-07-21 03:44:08.616667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.560 [2024-07-21 03:44:08.625955] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.560 [2024-07-21 03:44:08.626328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.560 [2024-07-21 03:44:08.626361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.560 [2024-07-21 03:44:08.626379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.560 [2024-07-21 03:44:08.626630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.560 [2024-07-21 03:44:08.626874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.560 [2024-07-21 03:44:08.626897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.560 [2024-07-21 03:44:08.626914] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.560 [2024-07-21 03:44:08.630509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.560 [2024-07-21 03:44:08.639820] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.640216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.640248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.640266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.640507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.640763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.640789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.640813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.644391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.561 [2024-07-21 03:44:08.653695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.654089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.654121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.654139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.654378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.654634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.654660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.654677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.658253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.561 [2024-07-21 03:44:08.667551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.667956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.667988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.668007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.668247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.668492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.668517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.668533] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.672126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.561 [2024-07-21 03:44:08.681431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.681839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.681870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.681889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.682127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.682370] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.682395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.682411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.685999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.561 [2024-07-21 03:44:08.695292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.695716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.695754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.695773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.696012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.696256] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.696280] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.696295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.699886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.561 [2024-07-21 03:44:08.709175] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.709575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.709606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.709633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.709873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.710115] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.710140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.710156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.713736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.561 [2024-07-21 03:44:08.723016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.723411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.723442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.723461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.723709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.723963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.723987] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.724003] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.727571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.561 [2024-07-21 03:44:08.736883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.561 [2024-07-21 03:44:08.737281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.561 [2024-07-21 03:44:08.737313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.561 [2024-07-21 03:44:08.737332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.561 [2024-07-21 03:44:08.737572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.561 [2024-07-21 03:44:08.737833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.561 [2024-07-21 03:44:08.737859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.561 [2024-07-21 03:44:08.737875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.561 [2024-07-21 03:44:08.741444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.562 [2024-07-21 03:44:08.750730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.751139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.751171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.751189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.751427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.751682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.751709] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.751725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.755297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.562 [2024-07-21 03:44:08.764584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.764967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.764998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.765016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.765255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.765497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.765521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.765538] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.769119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.562 [2024-07-21 03:44:08.778427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.778829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.778860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.778879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.779117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.779360] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.779385] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.779402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.783002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.562 [2024-07-21 03:44:08.792284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.792666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.792699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.792717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.792957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.793200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.793224] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.793239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.796839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.562 [2024-07-21 03:44:08.806119] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.806489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.806522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.806540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.806791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.807037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.807062] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.807078] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.810658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.562 [2024-07-21 03:44:08.820149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.820555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.820586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.820604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.820863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.821107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.821131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.821147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.824730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.562 [2024-07-21 03:44:08.834037] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.562 [2024-07-21 03:44:08.834437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.562 [2024-07-21 03:44:08.834469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:23.562 [2024-07-21 03:44:08.834494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:23.562 [2024-07-21 03:44:08.834743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:23.562 [2024-07-21 03:44:08.834987] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.562 [2024-07-21 03:44:08.835012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.562 [2024-07-21 03:44:08.835029] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.562 [2024-07-21 03:44:08.838600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.562 [2024-07-21 03:44:08.847904] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.562 [2024-07-21 03:44:08.848316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.562 [2024-07-21 03:44:08.848347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.562 [2024-07-21 03:44:08.848365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.562 [2024-07-21 03:44:08.848604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.562 [2024-07-21 03:44:08.848856] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.562 [2024-07-21 03:44:08.848881] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.563 [2024-07-21 03:44:08.848897] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.563 [2024-07-21 03:44:08.852472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.563 [2024-07-21 03:44:08.861778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.563 [2024-07-21 03:44:08.862164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.563 [2024-07-21 03:44:08.862198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.563 [2024-07-21 03:44:08.862217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.563 [2024-07-21 03:44:08.862457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.563 [2024-07-21 03:44:08.862713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.563 [2024-07-21 03:44:08.862738] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.563 [2024-07-21 03:44:08.862754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.563 [2024-07-21 03:44:08.866331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.875626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.876022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.822 [2024-07-21 03:44:08.876053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.822 [2024-07-21 03:44:08.876072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.822 [2024-07-21 03:44:08.876311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.822 [2024-07-21 03:44:08.876553] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.822 [2024-07-21 03:44:08.876583] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.822 [2024-07-21 03:44:08.876601] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.822 [2024-07-21 03:44:08.880197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.889490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.889872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.822 [2024-07-21 03:44:08.889904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.822 [2024-07-21 03:44:08.889922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.822 [2024-07-21 03:44:08.890161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.822 [2024-07-21 03:44:08.890404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.822 [2024-07-21 03:44:08.890429] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.822 [2024-07-21 03:44:08.890446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.822 [2024-07-21 03:44:08.894027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.903563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.903980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.822 [2024-07-21 03:44:08.904013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.822 [2024-07-21 03:44:08.904032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.822 [2024-07-21 03:44:08.904271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.822 [2024-07-21 03:44:08.904514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.822 [2024-07-21 03:44:08.904538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.822 [2024-07-21 03:44:08.904555] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.822 [2024-07-21 03:44:08.908140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.917426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.917835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.822 [2024-07-21 03:44:08.917868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.822 [2024-07-21 03:44:08.917887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.822 [2024-07-21 03:44:08.918126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.822 [2024-07-21 03:44:08.918370] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.822 [2024-07-21 03:44:08.918395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.822 [2024-07-21 03:44:08.918410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.822 [2024-07-21 03:44:08.921994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.931295] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.931671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.822 [2024-07-21 03:44:08.931705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.822 [2024-07-21 03:44:08.931723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.822 [2024-07-21 03:44:08.931963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.822 [2024-07-21 03:44:08.932208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.822 [2024-07-21 03:44:08.932232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.822 [2024-07-21 03:44:08.932248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.822 [2024-07-21 03:44:08.935832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.822 [2024-07-21 03:44:08.945326] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.822 [2024-07-21 03:44:08.945697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:08.945730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:08.945749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:08.945988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:08.946232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:08.946257] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:08.946274] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:08.949858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:08.959353] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:08.959748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:08.959780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:08.959799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:08.960038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:08.960282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:08.960307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:08.960323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:08.963909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:08.973231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:08.973632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:08.973664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:08.973682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:08.973927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:08.974170] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:08.974194] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:08.974210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:08.977793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:08.987088] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:08.987497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:08.987529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:08.987548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:08.987799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:08.988043] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:08.988067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:08.988083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:08.991662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.000960] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.001373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.001406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.001424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.001674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.001918] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:09.001943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:09.001959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:09.005531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.014818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.015201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.015233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.015251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.015490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.015745] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:09.015770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:09.015793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:09.019366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.028861] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.029255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.029287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.029305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.029544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.029798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:09.029825] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:09.029850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:09.033436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.042724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.043124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.043156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.043174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.043413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.043668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:09.043693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:09.043708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:09.047281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.056566] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.056981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.057014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.057033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.057272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.057517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.823 [2024-07-21 03:44:09.057542] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.823 [2024-07-21 03:44:09.057558] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.823 [2024-07-21 03:44:09.061146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.823 [2024-07-21 03:44:09.070427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.823 [2024-07-21 03:44:09.070835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.823 [2024-07-21 03:44:09.070867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.823 [2024-07-21 03:44:09.070885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.823 [2024-07-21 03:44:09.071124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.823 [2024-07-21 03:44:09.071367] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.824 [2024-07-21 03:44:09.071392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.824 [2024-07-21 03:44:09.071408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.824 [2024-07-21 03:44:09.074989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.824 [2024-07-21 03:44:09.084281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.824 [2024-07-21 03:44:09.084677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.824 [2024-07-21 03:44:09.084709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.824 [2024-07-21 03:44:09.084728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.824 [2024-07-21 03:44:09.084967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.824 [2024-07-21 03:44:09.085210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.824 [2024-07-21 03:44:09.085235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.824 [2024-07-21 03:44:09.085252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.824 [2024-07-21 03:44:09.088835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.824 [2024-07-21 03:44:09.098161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.824 [2024-07-21 03:44:09.098558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.824 [2024-07-21 03:44:09.098590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.824 [2024-07-21 03:44:09.098608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.824 [2024-07-21 03:44:09.098859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.824 [2024-07-21 03:44:09.099101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.824 [2024-07-21 03:44:09.099126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.824 [2024-07-21 03:44:09.099142] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.824 [2024-07-21 03:44:09.102722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.824 [2024-07-21 03:44:09.112001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.824 [2024-07-21 03:44:09.112400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.824 [2024-07-21 03:44:09.112432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.824 [2024-07-21 03:44:09.112451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.824 [2024-07-21 03:44:09.112702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.824 [2024-07-21 03:44:09.112951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.824 [2024-07-21 03:44:09.112976] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.824 [2024-07-21 03:44:09.112992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.824 [2024-07-21 03:44:09.116567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.824 [2024-07-21 03:44:09.125850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.824 [2024-07-21 03:44:09.126260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.824 [2024-07-21 03:44:09.126291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:23.824 [2024-07-21 03:44:09.126311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:23.824 [2024-07-21 03:44:09.126550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:23.824 [2024-07-21 03:44:09.126805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.824 [2024-07-21 03:44:09.126830] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.824 [2024-07-21 03:44:09.126846] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.824 [2024-07-21 03:44:09.130432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.083 [2024-07-21 03:44:09.139729] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.083 [2024-07-21 03:44:09.140136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.083 [2024-07-21 03:44:09.140168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.083 [2024-07-21 03:44:09.140187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.083 [2024-07-21 03:44:09.140426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.083 [2024-07-21 03:44:09.140680] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.083 [2024-07-21 03:44:09.140707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.083 [2024-07-21 03:44:09.140724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.083 [2024-07-21 03:44:09.144295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.083 [2024-07-21 03:44:09.153579] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.083 [2024-07-21 03:44:09.153956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.083 [2024-07-21 03:44:09.153987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.083 [2024-07-21 03:44:09.154005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.083 [2024-07-21 03:44:09.154243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.083 [2024-07-21 03:44:09.154486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.083 [2024-07-21 03:44:09.154511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.083 [2024-07-21 03:44:09.154527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.083 [2024-07-21 03:44:09.158348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.083 [2024-07-21 03:44:09.167424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.083 [2024-07-21 03:44:09.167842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.083 [2024-07-21 03:44:09.167874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.083 [2024-07-21 03:44:09.167893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.083 [2024-07-21 03:44:09.168132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.083 [2024-07-21 03:44:09.168374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.083 [2024-07-21 03:44:09.168399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.083 [2024-07-21 03:44:09.168415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.083 [2024-07-21 03:44:09.172000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.083 [2024-07-21 03:44:09.181292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.083 [2024-07-21 03:44:09.181692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.083 [2024-07-21 03:44:09.181725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.083 [2024-07-21 03:44:09.181743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.083 [2024-07-21 03:44:09.181981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.083 [2024-07-21 03:44:09.182224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.083 [2024-07-21 03:44:09.182249] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.083 [2024-07-21 03:44:09.182266] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.083 [2024-07-21 03:44:09.185850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.083 [2024-07-21 03:44:09.195139] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.083 [2024-07-21 03:44:09.195536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.083 [2024-07-21 03:44:09.195567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.083 [2024-07-21 03:44:09.195585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.083 [2024-07-21 03:44:09.195839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.083 [2024-07-21 03:44:09.196087] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.083 [2024-07-21 03:44:09.196111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.196128] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.199714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.209001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.209397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.209433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.209452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.209709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.209954] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.209978] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.209994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.213566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.222862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.223243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.223275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.223294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.223533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.223786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.223811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.223827] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.227418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.236738] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.237147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.237179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.237197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.237435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.237691] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.237717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.237733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.241305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.250602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.250980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.251012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.251030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.251268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.251517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.251543] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.251560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.255143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.264656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.265051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.265082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.265100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.265339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.265582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.265606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.265632] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.269207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.278495] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.278884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.278916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.278935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.279174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.279418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.279442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.279459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.283057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.292350] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.292739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.292771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.292789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.293027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.293270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.293294] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.293310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.296910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.306208] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.306608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.306647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.306666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.306904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.307148] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.307173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.307189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.310779] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.320075] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.320456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.320489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.320507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.320757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.321001] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.321026] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.321042] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.324627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.333936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.334309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.334340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.334357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.334596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.334848] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.084 [2024-07-21 03:44:09.334873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.084 [2024-07-21 03:44:09.334889] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.084 [2024-07-21 03:44:09.338465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.084 [2024-07-21 03:44:09.347976] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.084 [2024-07-21 03:44:09.348371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.084 [2024-07-21 03:44:09.348402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.084 [2024-07-21 03:44:09.348429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.084 [2024-07-21 03:44:09.348679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.084 [2024-07-21 03:44:09.348923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.085 [2024-07-21 03:44:09.348947] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.085 [2024-07-21 03:44:09.348964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.085 [2024-07-21 03:44:09.352555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.085 [2024-07-21 03:44:09.361855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.085 [2024-07-21 03:44:09.362225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.085 [2024-07-21 03:44:09.362257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.085 [2024-07-21 03:44:09.362276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.085 [2024-07-21 03:44:09.362515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.085 [2024-07-21 03:44:09.362768] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.085 [2024-07-21 03:44:09.362793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.085 [2024-07-21 03:44:09.362810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.085 [2024-07-21 03:44:09.366383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.085 [2024-07-21 03:44:09.375884] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.085 [2024-07-21 03:44:09.376280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.085 [2024-07-21 03:44:09.376311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.085 [2024-07-21 03:44:09.376329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.085 [2024-07-21 03:44:09.376569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.085 [2024-07-21 03:44:09.376820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.085 [2024-07-21 03:44:09.376845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.085 [2024-07-21 03:44:09.376861] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.085 [2024-07-21 03:44:09.380442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.085 [2024-07-21 03:44:09.389735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.085 [2024-07-21 03:44:09.390104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.085 [2024-07-21 03:44:09.390135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.085 [2024-07-21 03:44:09.390153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.085 [2024-07-21 03:44:09.390391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.085 [2024-07-21 03:44:09.390644] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.085 [2024-07-21 03:44:09.390677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.085 [2024-07-21 03:44:09.390693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.085 [2024-07-21 03:44:09.394264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.343 [2024-07-21 03:44:09.403564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.343 [2024-07-21 03:44:09.403955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.343 [2024-07-21 03:44:09.403988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.343 [2024-07-21 03:44:09.404007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.343 [2024-07-21 03:44:09.404248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.343 [2024-07-21 03:44:09.404493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.344 [2024-07-21 03:44:09.404517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.344 [2024-07-21 03:44:09.404533] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.344 [2024-07-21 03:44:09.408118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.344 [2024-07-21 03:44:09.417402] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.344 [2024-07-21 03:44:09.417806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.344 [2024-07-21 03:44:09.417839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.344 [2024-07-21 03:44:09.417857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.344 [2024-07-21 03:44:09.418096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.344 [2024-07-21 03:44:09.418348] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.344 [2024-07-21 03:44:09.418374] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.344 [2024-07-21 03:44:09.418390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.344 [2024-07-21 03:44:09.421972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.344 [2024-07-21 03:44:09.431267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.344 [2024-07-21 03:44:09.431667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.344 [2024-07-21 03:44:09.431700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.344 [2024-07-21 03:44:09.431718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.344 [2024-07-21 03:44:09.431958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.344 [2024-07-21 03:44:09.432202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.344 [2024-07-21 03:44:09.432228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.344 [2024-07-21 03:44:09.432244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.344 [2024-07-21 03:44:09.435857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.344 [2024-07-21 03:44:09.445166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.344 [2024-07-21 03:44:09.445558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.344 [2024-07-21 03:44:09.445596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.344 [2024-07-21 03:44:09.445623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.344 [2024-07-21 03:44:09.445865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.344 [2024-07-21 03:44:09.446120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.344 [2024-07-21 03:44:09.446144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.344 [2024-07-21 03:44:09.446160] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.344 [2024-07-21 03:44:09.449741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.344 [2024-07-21 03:44:09.459029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.459400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.459431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.459449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.459698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.459942] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.459966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.459982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.463552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.344 [2024-07-21 03:44:09.473049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.473441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.473472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.473490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.473740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.473984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.474008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.474024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.477595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.344 [2024-07-21 03:44:09.486906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.487318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.487350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.487368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.487611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.487864] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.487896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.487911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.491491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.344 [2024-07-21 03:44:09.500804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.501200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.501233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.501253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.501493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.501749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.501776] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.501792] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.505382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.344 [2024-07-21 03:44:09.514676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.515083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.515115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.515133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.515372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.515624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.515649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.515664] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.519236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.344 [2024-07-21 03:44:09.528523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.528901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.528933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.528951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.529189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.529432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.529456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.529477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.533074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.344 [2024-07-21 03:44:09.542564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.542989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.543024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.543042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.344 [2024-07-21 03:44:09.543282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.344 [2024-07-21 03:44:09.543524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.344 [2024-07-21 03:44:09.543549] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.344 [2024-07-21 03:44:09.543566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.344 [2024-07-21 03:44:09.547164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.344 [2024-07-21 03:44:09.556452] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.344 [2024-07-21 03:44:09.556858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-21 03:44:09.556890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.344 [2024-07-21 03:44:09.556908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.557147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.557389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.557414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.557431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.561020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.345 [2024-07-21 03:44:09.570298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.570681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.570713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.570731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.570970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.571212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.571237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.571253] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.574837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.345 [2024-07-21 03:44:09.584344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.584714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.584751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.584771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.585010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.585252] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.585277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.585293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.588879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.345 [2024-07-21 03:44:09.598390] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.598783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.598815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.598833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.599072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.599314] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.599339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.599355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.602939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.345 [2024-07-21 03:44:09.612433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.612835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.612867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.612885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.613124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.613366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.613391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.613407] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.616989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.345 [2024-07-21 03:44:09.626274] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.626674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.626706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.626723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.626962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.627210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.627235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.627251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.630851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.345 [2024-07-21 03:44:09.640141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.640518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.640551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.640569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.640817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.345 [2024-07-21 03:44:09.641061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.345 [2024-07-21 03:44:09.641086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.345 [2024-07-21 03:44:09.641102] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.345 [2024-07-21 03:44:09.644683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.345 [2024-07-21 03:44:09.654176] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.345 [2024-07-21 03:44:09.654549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-21 03:44:09.654580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.345 [2024-07-21 03:44:09.654598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.345 [2024-07-21 03:44:09.654846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.604 [2024-07-21 03:44:09.655090] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.604 [2024-07-21 03:44:09.655115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.604 [2024-07-21 03:44:09.655130] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.604 [2024-07-21 03:44:09.658713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.604 [2024-07-21 03:44:09.668207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.604 [2024-07-21 03:44:09.668574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.604 [2024-07-21 03:44:09.668606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.604 [2024-07-21 03:44:09.668635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.604 [2024-07-21 03:44:09.668875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.604 [2024-07-21 03:44:09.669120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.604 [2024-07-21 03:44:09.669145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.604 [2024-07-21 03:44:09.669161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.604 [2024-07-21 03:44:09.672746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.604 [2024-07-21 03:44:09.682251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.604 [2024-07-21 03:44:09.682659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.604 [2024-07-21 03:44:09.682691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.604 [2024-07-21 03:44:09.682709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.604 [2024-07-21 03:44:09.682948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.604 [2024-07-21 03:44:09.683191] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.604 [2024-07-21 03:44:09.683215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.604 [2024-07-21 03:44:09.683231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.604 [2024-07-21 03:44:09.686815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.604 [2024-07-21 03:44:09.696102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.604 [2024-07-21 03:44:09.696513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.604 [2024-07-21 03:44:09.696545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.604 [2024-07-21 03:44:09.696563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.604 [2024-07-21 03:44:09.696813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.604 [2024-07-21 03:44:09.697055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.604 [2024-07-21 03:44:09.697080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.604 [2024-07-21 03:44:09.697096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.604 [2024-07-21 03:44:09.700675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.604 [2024-07-21 03:44:09.709957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.604 [2024-07-21 03:44:09.710355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.604 [2024-07-21 03:44:09.710387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.604 [2024-07-21 03:44:09.710404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.604 [2024-07-21 03:44:09.710654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.604 [2024-07-21 03:44:09.710898] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.604 [2024-07-21 03:44:09.710922] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.604 [2024-07-21 03:44:09.710938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.604 [2024-07-21 03:44:09.714511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.604 [2024-07-21 03:44:09.723798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.604 [2024-07-21 03:44:09.724181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.604 [2024-07-21 03:44:09.724213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.724236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.724476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.724730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.724756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.724773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.728344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.605 [2024-07-21 03:44:09.737649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.738047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.738079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.738097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.738336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.738578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.738603] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.738630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.742205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.605 [2024-07-21 03:44:09.751487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.751891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.751923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.751941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.752180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.752422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.752446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.752462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.756045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.605 [2024-07-21 03:44:09.765337] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.765713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.765745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.765764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.766003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.766247] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.766277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.766294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.769877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.605 [2024-07-21 03:44:09.779371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.779748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.779780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.779798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.780037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.780281] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.780306] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.780322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.783919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.605 [2024-07-21 03:44:09.793213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.793609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.793647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.793665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.793905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.794147] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.794172] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.794187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.797784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.605 [2024-07-21 03:44:09.807082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.807478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.807510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.807528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.807779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.808022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.808046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.808062] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.811645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.605 [2024-07-21 03:44:09.820931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.821305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.821336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.821355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.605 [2024-07-21 03:44:09.821594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.605 [2024-07-21 03:44:09.821847] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.605 [2024-07-21 03:44:09.821873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.605 [2024-07-21 03:44:09.821902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.605 [2024-07-21 03:44:09.824839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.605 [2024-07-21 03:44:09.834259] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.605 [2024-07-21 03:44:09.834591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.605 [2024-07-21 03:44:09.834626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.605 [2024-07-21 03:44:09.834670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.834913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.835122] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.835142] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.835155] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.838150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.606 [2024-07-21 03:44:09.847544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.847948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.847976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.847993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.848218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.848429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.848449] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.848461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.851418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.606 [2024-07-21 03:44:09.860857] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.861294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.861322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.861343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.861582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.861791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.861813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.861826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.864783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.606 [2024-07-21 03:44:09.874235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.874659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.874688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.874704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.874946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.875139] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.875159] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.875172] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.878133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.606 [2024-07-21 03:44:09.887501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.887808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.887851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.887868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.888096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.888307] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.888328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.888341] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.891378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.606 [2024-07-21 03:44:09.900830] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.901238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.901267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.901282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.901504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.901743] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.901769] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.901783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.606 [2024-07-21 03:44:09.904739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.606 [2024-07-21 03:44:09.914373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.606 [2024-07-21 03:44:09.914796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.606 [2024-07-21 03:44:09.914826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.606 [2024-07-21 03:44:09.914842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.606 [2024-07-21 03:44:09.915085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.606 [2024-07-21 03:44:09.915311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.606 [2024-07-21 03:44:09.915334] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.606 [2024-07-21 03:44:09.915348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.865 [2024-07-21 03:44:09.918433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.865 [2024-07-21 03:44:09.927696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.865 [2024-07-21 03:44:09.928048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.865 [2024-07-21 03:44:09.928076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.865 [2024-07-21 03:44:09.928092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.865 [2024-07-21 03:44:09.928317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.865 [2024-07-21 03:44:09.928527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.865 [2024-07-21 03:44:09.928548] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.865 [2024-07-21 03:44:09.928561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.865 [2024-07-21 03:44:09.931530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.865 [2024-07-21 03:44:09.940997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.865 [2024-07-21 03:44:09.941423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.865 [2024-07-21 03:44:09.941452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.865 [2024-07-21 03:44:09.941469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.865 [2024-07-21 03:44:09.941722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:09.941951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:09.941972] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:09.941985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:09.944943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.866 [2024-07-21 03:44:09.954166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:09.954568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:09.954596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:09.954637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:09.954883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:09.955093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:09.955113] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:09.955126] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:09.958082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.866 [2024-07-21 03:44:09.967469] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:09.967812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:09.967840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:09.967857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:09.968080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:09.968297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:09.968317] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:09.968330] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:09.971356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.866 [2024-07-21 03:44:09.980811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:09.981218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:09.981251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:09.981268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:09.981505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:09.981730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:09.981753] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:09.981766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:09.984720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.866 [2024-07-21 03:44:09.994052] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:09.994481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:09.994510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:09.994527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:09.994782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:09.994994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:09.995015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:09.995027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:09.997996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.866 [2024-07-21 03:44:10.008318] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:10.008717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:10.008759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:10.008788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:10.009084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:10.009341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:10.009370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:10.009392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:10.013763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.866 [2024-07-21 03:44:10.022229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.866 [2024-07-21 03:44:10.022671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-21 03:44:10.022711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420 00:34:24.866 [2024-07-21 03:44:10.022735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set 00:34:24.866 [2024-07-21 03:44:10.022979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor 00:34:24.866 [2024-07-21 03:44:10.023229] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.866 [2024-07-21 03:44:10.023258] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.866 [2024-07-21 03:44:10.023280] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.866 [2024-07-21 03:44:10.026875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.866 [2024-07-21 03:44:10.036193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.866 [2024-07-21 03:44:10.036587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-21 03:44:10.036629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.866 [2024-07-21 03:44:10.036650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.866 [2024-07-21 03:44:10.036890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.866 [2024-07-21 03:44:10.037134] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.866 [2024-07-21 03:44:10.037160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.866 [2024-07-21 03:44:10.037186] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.866 [2024-07-21 03:44:10.040780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.866 [2024-07-21 03:44:10.050074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.866 [2024-07-21 03:44:10.050482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-21 03:44:10.050514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.866 [2024-07-21 03:44:10.050532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.866 [2024-07-21 03:44:10.050785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.866 [2024-07-21 03:44:10.051029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.866 [2024-07-21 03:44:10.051054] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.866 [2024-07-21 03:44:10.051070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.866 [2024-07-21 03:44:10.054656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.866 [2024-07-21 03:44:10.063956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.866 [2024-07-21 03:44:10.064355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-21 03:44:10.064387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.866 [2024-07-21 03:44:10.064405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.866 [2024-07-21 03:44:10.064655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.866 [2024-07-21 03:44:10.064900] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.866 [2024-07-21 03:44:10.064925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.866 [2024-07-21 03:44:10.064940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.866 [2024-07-21 03:44:10.068514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.866 [2024-07-21 03:44:10.077814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.866 [2024-07-21 03:44:10.078221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-21 03:44:10.078253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.866 [2024-07-21 03:44:10.078272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.866 [2024-07-21 03:44:10.078511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.866 [2024-07-21 03:44:10.078769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.866 [2024-07-21 03:44:10.078794] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.866 [2024-07-21 03:44:10.078810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.866 [2024-07-21 03:44:10.082397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.866 [2024-07-21 03:44:10.091693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.866 [2024-07-21 03:44:10.092065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-21 03:44:10.092102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.866 [2024-07-21 03:44:10.092122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.092360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.092604] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.092640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.092657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 [2024-07-21 03:44:10.096236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.867 [2024-07-21 03:44:10.105575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.105988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.106020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.106038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.106277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.106520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.106545] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.106561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2564245 Killed "${NVMF_APP[@]}" "$@"
00:34:24.867 [2024-07-21 03:44:10.110157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2565198
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2565198
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2565198 ']'
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:24.867 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:24.867 [2024-07-21 03:44:10.119471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.119839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.119872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.119896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.120136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.120379] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.120402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.120419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 [2024-07-21 03:44:10.124005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
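[Editor's note] tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the new process answers on its RPC socket. A simplified sketch of that wait loop follows — illustrative shape only, not the verbatim autotest_common.sh code, though rpc.py and the rpc_get_methods method are real SPDK interfaces:

    # Illustrative wait loop: poll until the freshly started nvmf_tgt pid is
    # alive and its RPC UNIX socket answers a trivial request.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # RPC server is listening
            fi
            sleep 0.1
        done
        return 1                                      # gave up after ~10 s
    }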
00:34:24.867 [2024-07-21 03:44:10.133533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.133944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.133975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.133993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.134232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.134476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.134500] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.134515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 [2024-07-21 03:44:10.138105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.867 [2024-07-21 03:44:10.147414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.147798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.147830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.147848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.148088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.148332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.148356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.148374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 [2024-07-21 03:44:10.151964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.867 [2024-07-21 03:44:10.161264] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.161664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.161696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.161714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.161953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.162202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.162226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.162242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.867 [2024-07-21 03:44:10.163574] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:24.867 [2024-07-21 03:44:10.163651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:24.867 [2024-07-21 03:44:10.165832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.867 [2024-07-21 03:44:10.175294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.867 [2024-07-21 03:44:10.175673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.867 [2024-07-21 03:44:10.175705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:24.867 [2024-07-21 03:44:10.175724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:24.867 [2024-07-21 03:44:10.175964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:24.867 [2024-07-21 03:44:10.176207] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.867 [2024-07-21 03:44:10.176232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.867 [2024-07-21 03:44:10.176248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.126 [2024-07-21 03:44:10.179841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.126 [2024-07-21 03:44:10.189151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.189526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.189558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.189576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.189826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.190070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.190094] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.190110] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.193695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.203008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.203365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.203396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.203414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.203665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.203909] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.203939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.203956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 EAL: No free 2048 kB hugepages reported on node 1
00:34:25.127 [2024-07-21 03:44:10.207532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
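[Editor's note] The EAL notice just above means no 2 MiB hugepages are available on NUMA node 1; the application keeps running from node 0's pool. The per-node pools can be checked through standard sysfs paths (not SPDK-specific):

    # Show reserved and free 2 MiB hugepages per NUMA node; node 1 reporting
    # zero would match the EAL notice in the log above.
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages; do
        echo "$f = $(cat "$f")"
    done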
00:34:25.127 [2024-07-21 03:44:10.217063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.217467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.217499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.217517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.217766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.218011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.218035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.218051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.221636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.230934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.231307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.231339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.231357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.231602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.231861] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.231886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.231902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.235478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.240102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:25.127 [2024-07-21 03:44:10.244800] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.245230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.245262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.245281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.245521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.245776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.245801] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.245818] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.249412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.258753] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.259272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.259312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.259333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.259582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.259840] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.259865] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.259885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.263459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
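[Editor's note] The three cores reported by spdk_app_start follow from the -m 0xE mask passed to nvmf_tgt earlier: bits 1-3 are set, bit 0 is not. A quick way to decode any such mask in the shell:

    # Decode an SPDK core mask; 0xE -> cores 1, 2 and 3, which matches both
    # "Total cores available: 3" and the reactor start-up lines further down.
    mask=0xE
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done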
00:34:25.127 [2024-07-21 03:44:10.272769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.273178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.273210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.273229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.273469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.273724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.273750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.273767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.277349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.286674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.287089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.287121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.287140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.287381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.287636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.287661] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.287678] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.291250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.300584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.301108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.301149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.301181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.301430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.301689] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.301715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.301733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.305309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.314602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.315001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.315034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.315053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.315293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.315537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.315562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.315579] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.127 [2024-07-21 03:44:10.319164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.127 [2024-07-21 03:44:10.328450] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.127 [2024-07-21 03:44:10.328850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.127 [2024-07-21 03:44:10.328883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.127 [2024-07-21 03:44:10.328902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.127 [2024-07-21 03:44:10.329142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.127 [2024-07-21 03:44:10.329387] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.127 [2024-07-21 03:44:10.329411] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.127 [2024-07-21 03:44:10.329428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.333033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.334205] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:25.128 [2024-07-21 03:44:10.334241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:25.128 [2024-07-21 03:44:10.334258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:25.128 [2024-07-21 03:44:10.334272] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:25.128 [2024-07-21 03:44:10.334284] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:25.128 [2024-07-21 03:44:10.334342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:25.128 [2024-07-21 03:44:10.334395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:34:25.128 [2024-07-21 03:44:10.334398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:25.128 [2024-07-21 03:44:10.342336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.342856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.342895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.342916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.343163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.343411] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.343435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.343454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
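[Editor's note] The app_setup_trace notices above spell out how to inspect the tracepoint mask the harness enabled with -e 0xFFFF. Both commands below are taken directly from those notices; only the copy destination is our choice:

    # Capture a snapshot of events from the live target, as the NOTICE suggests:
    spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis after the
    # target exits (path printed by the application itself):
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0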
00:34:25.128 [2024-07-21 03:44:10.347041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.356354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.356876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.356920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.356942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.357193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.357441] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.357466] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.357484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.361082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.370420] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.370980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.371023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.371046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.371295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.371543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.371568] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.371586] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.375176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.384517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.385081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.385124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.385157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.385407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.385666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.385691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.385710] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.389286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.398609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.399106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.399155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.399176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.399428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.399685] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.399711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.399728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.403311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.412664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.413210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.413252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.413275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.413528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.413797] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.413822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.413841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.417418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.128 [2024-07-21 03:44:10.426727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.128 [2024-07-21 03:44:10.427108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.128 [2024-07-21 03:44:10.427140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.128 [2024-07-21 03:44:10.427158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.128 [2024-07-21 03:44:10.427397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.128 [2024-07-21 03:44:10.427653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.128 [2024-07-21 03:44:10.427688] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.128 [2024-07-21 03:44:10.427704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.128 [2024-07-21 03:44:10.431281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.387 [2024-07-21 03:44:10.440279] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.387 [2024-07-21 03:44:10.440628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.387 [2024-07-21 03:44:10.440669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.387 [2024-07-21 03:44:10.440686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.387 [2024-07-21 03:44:10.440901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.387 [2024-07-21 03:44:10.441130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.387 [2024-07-21 03:44:10.441152] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.387 [2024-07-21 03:44:10.441167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.387 [2024-07-21 03:44:10.444437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.387 [2024-07-21 03:44:10.453816] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.387 [2024-07-21 03:44:10.454231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.387 [2024-07-21 03:44:10.454261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.387 [2024-07-21 03:44:10.454278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.387 [2024-07-21 03:44:10.454509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.387 [2024-07-21 03:44:10.454767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.387 [2024-07-21 03:44:10.454789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.387 [2024-07-21 03:44:10.454804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.387 [2024-07-21 03:44:10.458053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.387 [2024-07-21 03:44:10.467373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.387 [2024-07-21 03:44:10.467745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.387 [2024-07-21 03:44:10.467775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.387 [2024-07-21 03:44:10.467792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.387 [2024-07-21 03:44:10.468036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.387 [2024-07-21 03:44:10.468243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.387 [2024-07-21 03:44:10.468268] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.387 [2024-07-21 03:44:10.468284] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.387 [2024-07-21 03:44:10.471515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.387 [2024-07-21 03:44:10.476838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:25.387 [2024-07-21 03:44:10.480956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.387 [2024-07-21 03:44:10.481324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.387 [2024-07-21 03:44:10.481352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.387 [2024-07-21 03:44:10.481369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.387 [2024-07-21 03:44:10.481620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.387 [2024-07-21 03:44:10.481850] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.387 [2024-07-21 03:44:10.481872] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.387 [2024-07-21 03:44:10.481885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.387 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.387 [2024-07-21 03:44:10.485230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.387 [2024-07-21 03:44:10.494459] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.387 [2024-07-21 03:44:10.494802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.388 [2024-07-21 03:44:10.494831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.388 [2024-07-21 03:44:10.494848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.388 [2024-07-21 03:44:10.495078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.388 [2024-07-21 03:44:10.495293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.388 [2024-07-21 03:44:10.495314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.388 [2024-07-21 03:44:10.495328] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.388 [2024-07-21 03:44:10.498519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.388 [2024-07-21 03:44:10.507997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.388 [2024-07-21 03:44:10.508478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.388 [2024-07-21 03:44:10.508510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.388 [2024-07-21 03:44:10.508537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.388 [2024-07-21 03:44:10.508765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.388 [2024-07-21 03:44:10.509015] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.388 [2024-07-21 03:44:10.509037] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.388 [2024-07-21 03:44:10.509053] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.388 [2024-07-21 03:44:10.512234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.388 [2024-07-21 03:44:10.521532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.388 [2024-07-21 03:44:10.522010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.388 [2024-07-21 03:44:10.522047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.388 [2024-07-21 03:44:10.522066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.388 [2024-07-21 03:44:10.522305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.388 [2024-07-21 03:44:10.522541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
Malloc0
00:34:25.388 [2024-07-21 03:44:10.522564] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.388 [2024-07-21 03:44:10.522581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.388 [2024-07-21 03:44:10.525858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.388 [2024-07-21 03:44:10.535213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.388 [2024-07-21 03:44:10.535590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.388 [2024-07-21 03:44:10.535629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a961e0 with addr=10.0.0.2, port=4420
00:34:25.388 [2024-07-21 03:44:10.535648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a961e0 is same with the state(5) to be set
00:34:25.388 [2024-07-21 03:44:10.535866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a961e0 (9): Bad file descriptor
00:34:25.388 [2024-07-21 03:44:10.536088] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:25.388 [2024-07-21 03:44:10.536110] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:25.388 [2024-07-21 03:44:10.536124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
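[Editor's note] Interleaved with the reconnect noise, the rpc_cmd trace lines rebuild the target configuration step by step; the listener is added just below. Pulled together, the sequence is equivalent to running rpc.py by hand against the target's RPC socket (commands and arguments copied from the trace above and below):

    # Recreate the bdevperf target configuration from this section by hand.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420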
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.388 [2024-07-21 03:44:10.539480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:25.388 [2024-07-21 03:44:10.542349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:25.388 03:44:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2564532
00:34:25.388 [2024-07-21 03:44:10.548713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:25.388 [2024-07-21 03:44:10.621398] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:35.350
00:34:35.350 Latency(us)
00:34:35.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:35.350 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:35.350 Verification LBA range: start 0x0 length 0x4000
00:34:35.350 Nvme1n1 : 15.01 6725.87 26.27 8682.14 0.00 8281.16 849.54 15243.19
00:34:35.350 ===================================================================================================================
00:34:35.350 Total : 6725.87 26.27 8682.14 0.00 8281.16 849.54 15243.19
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2565198 ']'
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2565198
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 2565198 ']'
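[Editor's note] In the bdevperf summary above, runtime is in seconds, IOPS and MiB/s are averages over the run, Fail/s is the rate of failed I/Os (high here because the controller was being reset on purpose throughout), and the last three columns are average/min/max latency in microseconds. A hypothetical post-processing snippet, assuming the summary was saved to bdevperf.log without the CI timestamp prefixes:

    # Pull the headline numbers out of the bdevperf "Total" row.
    awk '$1 == "Total" {
        printf "IOPS=%s MiB/s=%s failed/s=%s avg_latency_us=%s\n", $3, $4, $5, $7
    }' bdevperf.log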
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 2565198
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2565198
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2565198'
killing process with pid 2565198
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 2565198
00:34:35.350 03:44:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 2565198
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:35.350 03:44:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:37.252 03:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:37.252
00:34:37.252 real 0m22.432s
00:34:37.252 user 1m0.182s
00:34:37.252 sys 0m4.152s
00:34:37.252 03:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:37.252 03:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:37.252 ************************************
00:34:37.252 END TEST nvmf_bdevperf
00:34:37.252 ************************************
00:34:37.252 03:44:22 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:37.252 03:44:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:37.252 03:44:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:37.252 03:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:37.252 ************************************
00:34:37.252 START TEST nvmf_target_disconnect
00:34:37.252 ************************************
00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:37.252 * Looking for test storage...
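[Editor's note] The @946-@970 trace above is the harness's killprocess helper tearing down the target: verify the pid, refuse to touch a sudo wrapper, then kill and reap it. A simplified sketch of that shape (illustrative, not the verbatim autotest_common.sh source):

    # Minimal kill-and-reap helper mirroring the traced steps above.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1      # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1             # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it (works when it is our child)
    }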
00:34:37.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:37.252 03:44:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
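
The helper traced here builds tables of known NIC device IDs (Intel E810/X722 plus several Mellanox parts) and then records the kernel net interfaces behind each matching PCI function. A rough standalone equivalent for the E810 IDs this job ends up matching (0x1592/0x159b); the sysfs layout is standard, the loop itself is illustrative:

# rough equivalent of the scan: match E810 functions and list their netdevs
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
    if [ "$vendor" = 0x8086 ] && { [ "$device" = 0x1592 ] || [ "$device" = 0x159b ]; }; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null              # e.g. cvl_0_0
    fi
done

On this machine it prints the two 0000:0a:00.x functions and their cvl_0_0/cvl_0_1 interfaces, matching the "Found net devices" lines in the trace.
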
00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:39.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:39.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.154 03:44:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:39.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:39.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.154 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:34:39.155 00:34:39.155 --- 10.0.0.2 ping statistics --- 00:34:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.155 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:34:39.155 00:34:39.155 --- 10.0.0.1 ping statistics --- 00:34:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.155 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.155 ************************************ 00:34:39.155 START TEST nvmf_target_disconnect_tc1 00:34:39.155 ************************************ 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:39.155 
03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:39.155 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.413 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.413 [2024-07-21 03:44:24.518269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-21 03:44:24.518344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ff740 with addr=10.0.0.2, port=4420 00:34:39.413 [2024-07-21 03:44:24.518382] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:39.413 [2024-07-21 03:44:24.518407] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:39.413 [2024-07-21 03:44:24.518421] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:39.413 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:39.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:39.413 Initializing NVMe Controllers 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:39.413 00:34:39.413 real 0m0.090s 00:34:39.413 user 0m0.040s 00:34:39.413 sys 
0m0.050s 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:39.413 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:39.413 ************************************ 00:34:39.414 END TEST nvmf_target_disconnect_tc1 00:34:39.414 ************************************ 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.414 ************************************ 00:34:39.414 START TEST nvmf_target_disconnect_tc2 00:34:39.414 ************************************ 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2568345 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2568345 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2568345 ']' 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:39.414 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.414 [2024-07-21 03:44:24.619577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
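
All of the 10.0.0.x endpoints in these tests come from the point-to-point topology nvmftestinit built earlier in the trace: one physical port is moved into the cvl_0_0_ns_spdk namespace to act as the target, and its sibling stays in the root namespace as the initiator. Collected into one runnable sequence (commands verbatim from the trace, address flushes omitted):

# build the two-port target/initiator topology
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two ping checks at the end correspond to the statistics printed above; tc1's expected-failure run works precisely because 10.0.0.2:4420 is reachable but nothing is listening there yet.
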
00:34:39.414 [2024-07-21 03:44:24.619685] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.414 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.414 [2024-07-21 03:44:24.689218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.671 [2024-07-21 03:44:24.786528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.671 [2024-07-21 03:44:24.786590] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.671 [2024-07-21 03:44:24.786625] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.672 [2024-07-21 03:44:24.786642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.672 [2024-07-21 03:44:24.786655] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.672 [2024-07-21 03:44:24.786740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:39.672 [2024-07-21 03:44:24.786793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:39.672 [2024-07-21 03:44:24.786846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:39.672 [2024-07-21 03:44:24.786849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.672 Malloc0 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.672 [2024-07-21 03:44:24.957348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.672 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.929 [2024-07-21 03:44:24.985655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2568367 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.929 03:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:39.929 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.880 03:44:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2568345 00:34:41.880 03:44:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 
00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 [2024-07-21 03:44:27.009907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 
starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Read completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.880 Write completed with error (sct=0, sc=8) 00:34:41.880 starting I/O failed 00:34:41.881 [2024-07-21 03:44:27.010234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O 
failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 [2024-07-21 03:44:27.010556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 
00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Read completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 Write completed with error (sct=0, sc=8) 00:34:41.881 starting I/O failed 00:34:41.881 [2024-07-21 03:44:27.010925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.881 [2024-07-21 03:44:27.011140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.011318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.011451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.011602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.011737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.011872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.011900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.012003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.012029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 
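
The failure storm above starts the moment the harness kill -9's the target (pid 2568345) while the reconnect app still has I/O queued: every outstanding command completes in error and each qpair reports CQ transport error -6. For orientation, this condensed sketch replays how that target had been provisioned moments earlier; the RPC calls are verbatim from the trace, while the rpc() wrapper is an illustrative stand-in for the harness's rpc_cmd:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { $SPDK/scripts/rpc.py "$@"; }
# start the target in the target namespace (cores 4-7, full trace mask),
# then wait for its RPC socket before issuing the calls below
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
# provision the subsystem the reconnect app is driving
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
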
00:34:41.881 [2024-07-21 03:44:27.012125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.012151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.881 [2024-07-21 03:44:27.012269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.881 [2024-07-21 03:44:27.012295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.881 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.012402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.012439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.012665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.012709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.012815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.012851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.013014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.013041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.013152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.013181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.013478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.013528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.013675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.013702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 00:34:41.882 [2024-07-21 03:44:27.013799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.882 [2024-07-21 03:44:27.013825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.882 qpair failed and we were unable to recover it. 
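
From here the log is the reconnect app retrying. Each three-line group is one attempt: connect() is refused (errno 111, ECONNREFUSED, since nothing is listening on 10.0.0.2:4420 anymore), the TCP layer reports the socket error for the qpair, and recovery gives up. The tqpair value distinguishes probe contexts (0x1bba840 versus the 0x7fb5..000b90 ones). One way to tally attempts per transport qpair from a saved copy of such a log; the file name here is hypothetical:

# count connection attempts per qpair pointer
grep -o 'tqpair=0x[0-9a-f]*' nvmf_target_disconnect.log | sort | uniq -c | sort -rn
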
00:34:41.882 [2024-07-21 03:44:27.013980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.882 [2024-07-21 03:44:27.014006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.882 qpair failed and we were unable to recover it.
00:34:41.882 [... the three-line error pattern above repeats without variation from 03:44:27.013980 through 03:44:27.048777 (elapsed 00:34:41.882-00:34:41.889): every connect() attempt fails with errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420, the reported tqpair cycling among 0x1bba840, 0x7fb5ec000b90, 0x7fb5fc000b90, and 0x7fb5f4000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:41.889 [2024-07-21 03:44:27.048869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.048894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.049889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.049938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.050130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.050329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.050471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 
00:34:41.889 [2024-07-21 03:44:27.050627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.050754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.050929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.050964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.051212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.051264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.051369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.051400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.051536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.051565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.051739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.051770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.051896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.051937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.052036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.052253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 
00:34:41.889 [2024-07-21 03:44:27.052459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.052603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.052752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.052912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.052956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.889 [2024-07-21 03:44:27.053097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.889 [2024-07-21 03:44:27.053142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.889 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.053269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.053296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.053437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.053478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.053578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.053622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.053748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.053775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.053872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.053900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 
00:34:41.890 [2024-07-21 03:44:27.053995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.054968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.054995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 
00:34:41.890 [2024-07-21 03:44:27.055516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.055928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.055955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.056764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 
00:34:41.890 [2024-07-21 03:44:27.056947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.056990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.057887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.890 [2024-07-21 03:44:27.057925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.890 qpair failed and we were unable to recover it. 00:34:41.890 [2024-07-21 03:44:27.058082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.058240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.058400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 
00:34:41.891 [2024-07-21 03:44:27.058562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.058750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.058894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.058933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.059931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.059974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 
00:34:41.891 [2024-07-21 03:44:27.060218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.060885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.060980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.061123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.061261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.061438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 
00:34:41.891 [2024-07-21 03:44:27.061588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.891 [2024-07-21 03:44:27.061764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.891 [2024-07-21 03:44:27.061793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.891 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.061895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.061937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.062958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.062986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.063082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 
00:34:41.892 [2024-07-21 03:44:27.063204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.063353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.063521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.063707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.063824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.063851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 
00:34:41.892 [2024-07-21 03:44:27.064755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.064885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.064917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.065890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.065922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.066091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.066138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.066253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.066302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 
00:34:41.892 [2024-07-21 03:44:27.066450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.066476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.066590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.066633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.066753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.066797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.066970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.067015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.892 [2024-07-21 03:44:27.067157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.892 [2024-07-21 03:44:27.067201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.892 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.067289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.067316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.067465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.067491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.067602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.067665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.067809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.067841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.067982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 
00:34:41.893 [2024-07-21 03:44:27.068181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.068415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.068608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.068811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.068943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.068973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.069101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.069289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.069471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.069642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.069820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 
00:34:41.893 [2024-07-21 03:44:27.069966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.069995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.070152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.070181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.070340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.070369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.070477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.070510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.070609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.070642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.070795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.070821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.071051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.071085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.071248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.071293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.071443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.071470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 00:34:41.893 [2024-07-21 03:44:27.071568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.071595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.893 qpair failed and we were unable to recover it. 
00:34:41.893 [2024-07-21 03:44:27.071696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.893 [2024-07-21 03:44:27.071724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.071852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.071879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.071976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.072855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.072884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 00:34:41.894 [2024-07-21 03:44:27.073014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.894 [2024-07-21 03:44:27.073041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:41.894 qpair failed and we were unable to recover it. 
00:34:41.894 [2024-07-21 03:44:27.073201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.073230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.073332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.073362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.073497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.073527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.073702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.073748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.073840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.073868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.073982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.074952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.074996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.075143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.075170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.075304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.075345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.075454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.075494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.075624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.075671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.075893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.075931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.076934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.894 [2024-07-21 03:44:27.076964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.894 qpair failed and we were unable to recover it.
00:34:41.894 [2024-07-21 03:44:27.077124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.077268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.077423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.077556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.077729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.077878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.077913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.078906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.078936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.079912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.079953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.080151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.080354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.080502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.080675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.080802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.080983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.081209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.081387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.081590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.081744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.081865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.081892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.082059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.082088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.082221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.082250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.082401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.082427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.895 qpair failed and we were unable to recover it.
00:34:41.895 [2024-07-21 03:44:27.082571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.895 [2024-07-21 03:44:27.082600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.082764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.082791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.082875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.082918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.083858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.083984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.084146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.084349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.084504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.084675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.084848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.084875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.085897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.085925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.086840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.086866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.087015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.087042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.087188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.087216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.087345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.087372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.087523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.087549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.896 [2024-07-21 03:44:27.087732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.896 [2024-07-21 03:44:27.087777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.896 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.087947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.087990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.088126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.088175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.088409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.088457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.088594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.088631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.088766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.088795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.089080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.089327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.089488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.089664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.089843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.089961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.090005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.090195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.090242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.090401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.090451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.090562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.090589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.090733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.090773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.090961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.091203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.091457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.091633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.091755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.091872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.091912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.092942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.092970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.093156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.093207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.093366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.093394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.093502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.093530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.897 qpair failed and we were unable to recover it.
00:34:41.897 [2024-07-21 03:44:27.093687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.897 [2024-07-21 03:44:27.093727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.093874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.093919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.094142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.094326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.094471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.094630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.094837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.094996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.095844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.095985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.096157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.096398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.096578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.096741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.096898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.096928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.097126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.097155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.097317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.097379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.097512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.097538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.097670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.097697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.097826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.097853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.098026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.098052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.098225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.898 [2024-07-21 03:44:27.098254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.898 qpair failed and we were unable to recover it.
00:34:41.898 [2024-07-21 03:44:27.098376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.098418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.098599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.098631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.098757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.098783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.098921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.098950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.099909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.099936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.100896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.100923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.101928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.101957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.102188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.102217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.102348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.102377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.102526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.102554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.102691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.102736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.102880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.899 [2024-07-21 03:44:27.102911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:41.899 qpair failed and we were unable to recover it.
00:34:41.899 [2024-07-21 03:44:27.103021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.899 [2024-07-21 03:44:27.103050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.899 qpair failed and we were unable to recover it. 00:34:41.899 [2024-07-21 03:44:27.103265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.899 [2024-07-21 03:44:27.103314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.899 qpair failed and we were unable to recover it. 00:34:41.899 [2024-07-21 03:44:27.103448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.899 [2024-07-21 03:44:27.103477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.899 qpair failed and we were unable to recover it. 00:34:41.899 [2024-07-21 03:44:27.103595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.899 [2024-07-21 03:44:27.103629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.899 qpair failed and we were unable to recover it. 00:34:41.899 [2024-07-21 03:44:27.103741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.899 [2024-07-21 03:44:27.103772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.899 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.103945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.103973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.104105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.104134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.104291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.104320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.104454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.104483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.104610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.104673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 
00:34:41.900 [2024-07-21 03:44:27.104805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.104835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.104975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.105867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.105996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.106186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 
00:34:41.900 [2024-07-21 03:44:27.106393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.106525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.106712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.106863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.106905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.107858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.107902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 
00:34:41.900 [2024-07-21 03:44:27.108012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.108186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.108389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.108580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.108784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.108947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.108976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.109079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.109107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.900 qpair failed and we were unable to recover it. 00:34:41.900 [2024-07-21 03:44:27.109243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.900 [2024-07-21 03:44:27.109287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.109385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.109416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.109579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.109608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 
00:34:41.901 [2024-07-21 03:44:27.109736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.109763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.109924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.109951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.110857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.110899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.111061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.111284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 
00:34:41.901 [2024-07-21 03:44:27.111451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.111600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.111757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.111903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.111945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.112863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.112892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 
00:34:41.901 [2024-07-21 03:44:27.113051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.113188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.113371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.113497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.113665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.113829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.113855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.114000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.114029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.114142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.114189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.114324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.114354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 00:34:41.901 [2024-07-21 03:44:27.114485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.901 [2024-07-21 03:44:27.114515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.901 qpair failed and we were unable to recover it. 
00:34:41.902 [2024-07-21 03:44:27.114680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.114707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.114831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.114857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.114978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.115150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.115314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.115478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.115632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.115791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.115835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.116003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.116224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 
00:34:41.902 [2024-07-21 03:44:27.116419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.116558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.116768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.116922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.116966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.117948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.117975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 
00:34:41.902 [2024-07-21 03:44:27.118111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.118266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.118436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.118554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.118723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.118943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.118987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.119119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.119285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.119436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.119570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 
00:34:41.902 [2024-07-21 03:44:27.119768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.119931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.902 [2024-07-21 03:44:27.119960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.902 qpair failed and we were unable to recover it. 00:34:41.902 [2024-07-21 03:44:27.120083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.120300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.120460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.120651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.120776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.120943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.120972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.121176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.121205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.121364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.121393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 
00:34:41.903 [2024-07-21 03:44:27.121526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.121555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.121728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.121768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.121970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.122817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.122978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.123132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 
00:34:41.903 [2024-07-21 03:44:27.123324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.123509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.123656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.123830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.123856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.124908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.124937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 
00:34:41.903 [2024-07-21 03:44:27.125068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.125098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.125258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.125287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.125403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.125444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.125590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.125623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.903 [2024-07-21 03:44:27.125706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.903 [2024-07-21 03:44:27.125737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.903 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.125848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.125874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.125984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.126129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.126291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.126452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 
00:34:41.904 [2024-07-21 03:44:27.126596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.126781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.126904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.126930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.127865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.127893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 00:34:41.904 [2024-07-21 03:44:27.128007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.128036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it. 
00:34:41.904 [2024-07-21 03:44:27.128214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.904 [2024-07-21 03:44:27.128248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.904 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420) repeats roughly 200 more times between 03:44:27.128484 and 03:44:27.164097, alternating over tqpair handles 0x1bba840, 0x7fb5f4000b90, 0x7fb5fc000b90 and 0x7fb5ec000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:41.911 [2024-07-21 03:44:27.164199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.164226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it.
00:34:41.911 [2024-07-21 03:44:27.164383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.164440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.164571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.164599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.164732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.164759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.164898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.164947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.165054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.165085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.165234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.165264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.165508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.165564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.165719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.165747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.165853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.165882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.166124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 
00:34:41.911 [2024-07-21 03:44:27.166310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.166442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.166630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.166797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.166961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.166991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.167121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.167310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.167446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.167600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.167762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 
00:34:41.911 [2024-07-21 03:44:27.167951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.167981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.168105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.168135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.911 qpair failed and we were unable to recover it. 00:34:41.911 [2024-07-21 03:44:27.168237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.911 [2024-07-21 03:44:27.168266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.168420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.168449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.168587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.168620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.168719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.168745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.168871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.168899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.169054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.169232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.169427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 
00:34:41.912 [2024-07-21 03:44:27.169598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.169773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.169938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.169965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.170828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.170857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.171048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 
00:34:41.912 [2024-07-21 03:44:27.171205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.171424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.171590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.171745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.171939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.171982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.172149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.172191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.172360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.172389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.172530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.172556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.172711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.172756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.172924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.172956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 
00:34:41.912 [2024-07-21 03:44:27.173192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.173252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.173495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.173546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.173643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.173681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.173828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.173871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.174108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.912 [2024-07-21 03:44:27.174138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.912 qpair failed and we were unable to recover it. 00:34:41.912 [2024-07-21 03:44:27.174231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.174379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.174499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.174609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.174778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 
00:34:41.913 [2024-07-21 03:44:27.174937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.174966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.175086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.175254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.175489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.175641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.175795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.175970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.176147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.176349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.176521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 
00:34:41.913 [2024-07-21 03:44:27.176654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.176782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.176808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.176923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.177935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.177964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.178081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.178111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 
00:34:41.913 [2024-07-21 03:44:27.178251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.178294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:41.913 [2024-07-21 03:44:27.178413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.913 [2024-07-21 03:44:27.178444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:41.913 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.178551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.178582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.178737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.178764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.178894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.178920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 
00:34:42.193 [2024-07-21 03:44:27.179812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.179937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.179963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.180177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.180320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.180531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.180722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.180895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.180993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.181166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.181332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 
00:34:42.193 [2024-07-21 03:44:27.181467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.181627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.181755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.181884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.181912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.182801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 
00:34:42.193 [2024-07-21 03:44:27.182928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.193 [2024-07-21 03:44:27.182954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.193 qpair failed and we were unable to recover it. 00:34:42.193 [2024-07-21 03:44:27.183048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.183889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.183988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.184173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.184319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 
00:34:42.194 [2024-07-21 03:44:27.184478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.184653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.184769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.184944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.184970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 00:34:42.194 [2024-07-21 03:44:27.185707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.194 [2024-07-21 03:44:27.185734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.194 qpair failed and we were unable to recover it. 
00:34:42.194 [2024-07-21 03:44:27.185877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.194 [2024-07-21 03:44:27.185904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.194 qpair failed and we were unable to recover it.
00:34:42.194 [the same three-line failure (posix.c:1037 connect() failed, errno = 111 -> nvme_tcp.c:2374 sock connection error -> "qpair failed and we were unable to recover it.") repeats back-to-back from 03:44:27.185 through 03:44:27.221, always against addr=10.0.0.2, port=4420, cycling over tqpair values 0x1bba840, 0x7fb5ec000b90, 0x7fb5f4000b90, and 0x7fb5fc000b90]
00:34:42.198 [2024-07-21 03:44:27.222083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.222816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.222960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.223157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.223333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.223460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 
00:34:42.198 [2024-07-21 03:44:27.223581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.223811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.223840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.224905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.224934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.225057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.225243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 
00:34:42.198 [2024-07-21 03:44:27.225422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.225549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.225728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.225873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.225917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.226886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.226936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 
00:34:42.198 [2024-07-21 03:44:27.227049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.227889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.227915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.228036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.228224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.228395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 
00:34:42.198 [2024-07-21 03:44:27.228565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.228717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.228887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.228916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.229019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.229060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.229230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.229259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.229384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.229414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.229574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.198 [2024-07-21 03:44:27.229599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.198 qpair failed and we were unable to recover it. 00:34:42.198 [2024-07-21 03:44:27.229720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.229747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.229841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.229869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.230085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.230142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 
00:34:42.199 [2024-07-21 03:44:27.230298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.230371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.230538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.230567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.230703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.230734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.230859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.230885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.231915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.231941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 
00:34:42.199 [2024-07-21 03:44:27.232088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.232117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.232277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.232306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.232489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.232518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.232665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.232693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.232819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.232846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.232979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.233134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.233300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.233417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.233573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 
00:34:42.199 [2024-07-21 03:44:27.233751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.233937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.233966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.234109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.234136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.234264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.234293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.234515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.234543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.234708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.234735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.234824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.234851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.235038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.235238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.235399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 
00:34:42.199 [2024-07-21 03:44:27.235572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.235777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.235932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.235958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.236927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.236956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.237082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 
00:34:42.199 [2024-07-21 03:44:27.237225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.237404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.237536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.237717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.237859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.237885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.238005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.238031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.238144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.238175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.199 [2024-07-21 03:44:27.238264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.199 [2024-07-21 03:44:27.238293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.199 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.238416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.238444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.238589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.238629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 
00:34:42.200 [2024-07-21 03:44:27.238722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.238748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.238848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.238873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.239890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.239918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 
00:34:42.200 [2024-07-21 03:44:27.240361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.240968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.240993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.241124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.241150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.241292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.241321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.241457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.241485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.241650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.241676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 00:34:42.200 [2024-07-21 03:44:27.241795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.200 [2024-07-21 03:44:27.241821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.200 qpair failed and we were unable to recover it. 
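errno = 111 is ECONNREFUSED on Linux: the connect() reached 10.0.0.2 but nothing was accepting on port 4420, so the kernel rejected the handshake outright. A minimal standalone probe reproduces exactly this failure mode; the address and port are taken from the log above, everything else is illustrative and not part of the test:

    /* probe.c - minimal sketch: one TCP connect attempt, reporting errno the
     * way posix_sock_create does above. Target addr/port come from the log;
     * the rest is an assumption for illustration. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the target this prints errno 111,
             * "Connection refused", matching the records above. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }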
00:34:42.200 [2024-07-21 03:44:27.241971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8390 is same with the state(5) to be set
00:34:42.200 [... connect() failed, errno = 111 records continue from 03:44:27.242150 through 03:44:27.243418 for tqpair=0x7fb5ec000b90 (addr=10.0.0.2, port=4420); each ends "qpair failed and we were unable to recover it." ...]
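The single nvme_tcp_qpair_set_recv_state record is a different complaint: the receive-state machine was asked to transition into the state it already holds (state 5), and it logs rather than silently accepting the no-op. A hedged sketch of that guard pattern follows; the enum values and names are assumptions for illustration, not SPDK's actual definitions:

    /* Sketch of a same-state guard like the one the message above implies.
     * Enum and function names are illustrative assumptions, not SPDK's. */
    #include <stdio.h>

    enum recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* ... intermediate states elided ... */
        RECV_STATE_ERROR = 5,            /* assumption: the "state(5)" in the log */
    };

    struct tqpair {
        enum recv_state recv_state;
    };

    static void set_recv_state(struct tqpair *q, enum recv_state s)
    {
        if (q->recv_state == s) {
            /* Log the redundant transition instead of applying it. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the "
                    "state(%d) to be set\n", (void *)q, (int)s);
            return;
        }
        q->recv_state = s;
    }

    int main(void)
    {
        struct tqpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the guard */
        return 0;
    }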
00:34:42.200 [... the same failure pattern runs on from 03:44:27.243509 through 03:44:27.249441, alternating tqpair handles 0x1bba840 and 0x7fb5ec000b90 against addr=10.0.0.2, port=4420; every record ends "qpair failed and we were unable to recover it." ...]
00:34:42.201 [2024-07-21 03:44:27.249606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.249642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.249751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.249777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.249917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.249946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.250950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.250976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 
00:34:42.201 [2024-07-21 03:44:27.251065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.251900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.251980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.252151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.252318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 
00:34:42.201 [2024-07-21 03:44:27.252519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.252699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.252855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.252882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.252995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.253167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.253329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.253502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.253697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.253847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.253873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.254087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.254147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 
00:34:42.201 [2024-07-21 03:44:27.254312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.254338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.254457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.254482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.254597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.254653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.254817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.254845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.255794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 
00:34:42.201 [2024-07-21 03:44:27.255916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.255942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.256095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.201 [2024-07-21 03:44:27.256121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.201 qpair failed and we were unable to recover it. 00:34:42.201 [2024-07-21 03:44:27.256242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.256268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.256384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.256410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.256566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.256596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.256776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.256816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.256924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.256953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.257094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.257138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.257308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.257338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.257518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.257545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 
00:34:42.202 [2024-07-21 03:44:27.257665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.257694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.257794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.257823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.257977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.258117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.258341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.258506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.258717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.258907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.258947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.259101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.259267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 
00:34:42.202 [2024-07-21 03:44:27.259458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.259620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.259768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.259889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.259915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.260788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 
00:34:42.202 [2024-07-21 03:44:27.260916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.260945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.261868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.261898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 
00:34:42.202 [2024-07-21 03:44:27.262439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.262901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.262930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.263925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.263955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 
00:34:42.202 [2024-07-21 03:44:27.264148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.264201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.264323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.264350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.202 [2024-07-21 03:44:27.264480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.202 [2024-07-21 03:44:27.264520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.202 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.264664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.264694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.264821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.264850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.264990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.265232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.265416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.265539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.265720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 
00:34:42.203 [2024-07-21 03:44:27.265908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.265937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.266943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.266969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.267092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.267265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 
00:34:42.203 [2024-07-21 03:44:27.267405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.267580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.267778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.267953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.267985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.268916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.268944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 
00:34:42.203 [2024-07-21 03:44:27.269112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.269320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.269466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.269620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.269768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.269905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.269934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.270068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.270098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.270239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.270267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.270426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.270472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 00:34:42.203 [2024-07-21 03:44:27.270581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.203 [2024-07-21 03:44:27.270627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.203 qpair failed and we were unable to recover it. 
00:34:42.203 [2024-07-21 03:44:27.270739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.203 [2024-07-21 03:44:27.270765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:42.203 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats from 03:44:27.270 through 03:44:27.306 (job time 00:34:42.203-00:34:42.207), cycling over tqpair handles 0x7fb5ec000b90, 0x7fb5f4000b90, 0x7fb5fc000b90, and 0x1bba840; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends "qpair failed and we were unable to recover it." ...]
00:34:42.207 [2024-07-21 03:44:27.307068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.307963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.307990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.308109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.308275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.308435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 
00:34:42.207 [2024-07-21 03:44:27.308556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.308715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.308838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.308867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.309906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.309962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.310107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.310150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 
00:34:42.207 [2024-07-21 03:44:27.310345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.310405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.310533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.310561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.310695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.310723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.310856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.310900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.207 [2024-07-21 03:44:27.311968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.311995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 
00:34:42.207 [2024-07-21 03:44:27.312143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.207 [2024-07-21 03:44:27.312169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.207 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.312309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.312335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.312454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.312480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.312573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.312597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.312770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.312814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.312997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.313190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.313382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.313505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.313668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 
00:34:42.208 [2024-07-21 03:44:27.313824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.313879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.314950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.314993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.315161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.315305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 
00:34:42.208 [2024-07-21 03:44:27.315456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.315632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.315823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.315970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.315996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.316153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.316306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.316479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.316627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.316825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.316959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 
00:34:42.208 [2024-07-21 03:44:27.317157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.317293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.317423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.317583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.317750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.317895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.317923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.318100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.318257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.318480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.318592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 
00:34:42.208 [2024-07-21 03:44:27.318760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.318925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.318954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.319916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.319961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.320053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.320080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.320176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.320203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 
00:34:42.208 [2024-07-21 03:44:27.320298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.320326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.208 qpair failed and we were unable to recover it. 00:34:42.208 [2024-07-21 03:44:27.320423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.208 [2024-07-21 03:44:27.320451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.320560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.320588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.320727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.320765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.320919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.320947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.321079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.321252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.321373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.321526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.321676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 
00:34:42.209 [2024-07-21 03:44:27.321829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.321854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.322811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.322841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.323011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.323193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.323356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 
00:34:42.209 [2024-07-21 03:44:27.323552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.323707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.323861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.323904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.324948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.324992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 
00:34:42.209 [2024-07-21 03:44:27.325155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.325184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.325347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.325377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.325565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.325605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.325722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.325750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.325897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.325924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.326070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.326099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.326268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.326312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.326486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.326543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.326690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.326718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.326844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.326870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 
00:34:42.209 [2024-07-21 03:44:27.326984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.327946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.327988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.328153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.328303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.328435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 
00:34:42.209 [2024-07-21 03:44:27.328625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.328777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.328936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.328964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.329100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.329270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.329401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.329563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.209 [2024-07-21 03:44:27.329728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.209 qpair failed and we were unable to recover it. 00:34:42.209 [2024-07-21 03:44:27.329850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.329876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 
00:34:42.210 [2024-07-21 03:44:27.330224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.330855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.330989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.331018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.331117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.331144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.331280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.331308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.331408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.331441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 00:34:42.210 [2024-07-21 03:44:27.331542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.210 [2024-07-21 03:44:27.331567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.210 qpair failed and we were unable to recover it. 
00:34:42.210 [2024-07-21 03:44:27.331687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.331714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.331807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.331832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.331954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.331994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.332939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.332965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.333885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.333912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.334853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.334882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.335918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.335945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.336914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.336958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.337911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.337954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.338059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.338107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.338277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.338320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.210 qpair failed and we were unable to recover it.
00:34:42.210 [2024-07-21 03:44:27.338440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.210 [2024-07-21 03:44:27.338466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.338632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.338677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.338797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.338830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.338977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.339162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.339306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.339500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.339631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.339815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.339849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.340812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.340838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.341946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.341974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.342186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.342217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.342385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.342412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.342531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.342558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.342699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.342744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.342912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.342941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.343879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.343908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.344880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.344908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.345921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.345965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.346119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.346345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.346529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.346650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.346834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.346968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.347944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.347970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.211 [2024-07-21 03:44:27.348131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.211 [2024-07-21 03:44:27.348160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.211 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.348400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.348461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.348599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.348645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.348815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.348859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.349881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.349909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.350915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.350943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.351926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.351972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.352193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.352250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.352350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.352376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.352532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.352558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.352706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.352739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.352876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.352905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.353968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.353997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.354888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.354917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.355905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.355951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.212 qpair failed and we were unable to recover it.
00:34:42.212 [2024-07-21 03:44:27.356818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.212 [2024-07-21 03:44:27.356847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.356982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.357945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.357989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.358931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.358976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.359156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.359219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.359347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.359376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.359499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.359527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.359640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.213 [2024-07-21 03:44:27.359667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.213 qpair failed and we were unable to recover it.
00:34:42.213 [2024-07-21 03:44:27.359844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.359889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.360831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.360860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.361027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.361196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.361361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 
00:34:42.213 [2024-07-21 03:44:27.361501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.361705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.361868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.361896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.362908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.362936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 
00:34:42.213 [2024-07-21 03:44:27.363063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.363872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.363900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.364031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.364190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.364352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 
00:34:42.213 [2024-07-21 03:44:27.364567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.364746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.364895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.364923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.365925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.365954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.213 qpair failed and we were unable to recover it. 00:34:42.213 [2024-07-21 03:44:27.366140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.213 [2024-07-21 03:44:27.366184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.366297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.366342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.366451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.366478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.366631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.366659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.366781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.366813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.366918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.366948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.367088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.367263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.367393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.367580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.367713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.367910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.367943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.368956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.368985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.369234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.369412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.369539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.369666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.369807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.369959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.369988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.370945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.370974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.371086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.371217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.371440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.371630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.371767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.371957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.371986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.372079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.372237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.372367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.372530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.372705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.372871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.372900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.373878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.373990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.374329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 
00:34:42.214 [2024-07-21 03:44:27.374475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.374627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.374799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.374843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.374986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.375031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.375123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.375150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.375271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.375297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.214 [2024-07-21 03:44:27.375422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.214 [2024-07-21 03:44:27.375450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.214 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.375542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.375573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.375749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.375779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 
00:34:42.215 [2024-07-21 03:44:27.376190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.376852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.376961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.377112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.377315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.377507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.377655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 
00:34:42.215 [2024-07-21 03:44:27.377782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.377808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.377957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.378146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.378300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.378468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.378625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.378838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.378883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.379027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.379059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.379225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.379255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.379427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.379486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 
00:34:42.215 [2024-07-21 03:44:27.379629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.379672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.379812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.379842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.379977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.380930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.380978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.381145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 
00:34:42.215 [2024-07-21 03:44:27.381325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.381466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.381608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.381765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.381906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.381933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.382069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.382241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.382393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.382550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.382683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 
00:34:42.215 [2024-07-21 03:44:27.382840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.382867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.215 [2024-07-21 03:44:27.383930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.215 [2024-07-21 03:44:27.383956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.215 qpair failed and we were unable to recover it. 00:34:42.216 [2024-07-21 03:44:27.384046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.216 [2024-07-21 03:44:27.384073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.216 qpair failed and we were unable to recover it. 00:34:42.216 [2024-07-21 03:44:27.384185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.216 [2024-07-21 03:44:27.384211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.216 qpair failed and we were unable to recover it. 
00:34:42.216 [2024-07-21 03:44:27.384339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.216 [2024-07-21 03:44:27.384368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.216 qpair failed and we were unable to recover it.
00:34:42.216 [2024-07-21 03:44:27.386658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.216 [2024-07-21 03:44:27.386699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.216 qpair failed and we were unable to recover it.
00:34:42.216 [2024-07-21 03:44:27.389038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.216 [2024-07-21 03:44:27.389082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.216 qpair failed and we were unable to recover it.
00:34:42.219 [... the same three-line connect()/qpair failure repeats through 03:44:27.418692, cycling over tqpair=0x1bba840, 0x7fb5f4000b90, and 0x7fb5fc000b90, all for addr=10.0.0.2, port=4420 ...]
00:34:42.219 [2024-07-21 03:44:27.418817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.418843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.418948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.418977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.419146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.419176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.419314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.419340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.419484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.419512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.419697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.419725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.419875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.419901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.420027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.420208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.420415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 
00:34:42.219 [2024-07-21 03:44:27.420565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.420741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.420896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.420923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.421896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.421924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 
00:34:42.219 [2024-07-21 03:44:27.422217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.219 qpair failed and we were unable to recover it. 00:34:42.219 [2024-07-21 03:44:27.422965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.219 [2024-07-21 03:44:27.422992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.423102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.423278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.423421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.423575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.423719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.423878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.423922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.424916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.424942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.425074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.425222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.425367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.425567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.425756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.425945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.425975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.426946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.426974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.427136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.427301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.427422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.427560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.427741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.427861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.427889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.428060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.428224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.428363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.428529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.428685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.428830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.428856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.429875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.429903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.430020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.430165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.430348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.430571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.430790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.430938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.430967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.431122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.431299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.431456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.431598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.431785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.431957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.431987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.432960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.432990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.433102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.433277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 
00:34:42.220 [2024-07-21 03:44:27.433452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.433627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.433774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.433910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.433936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.434085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.434111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.220 qpair failed and we were unable to recover it. 00:34:42.220 [2024-07-21 03:44:27.434247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.220 [2024-07-21 03:44:27.434276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.434474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.434503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.434624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.434651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.434746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.434772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.434886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.434911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 
00:34:42.221 [2024-07-21 03:44:27.435061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.435223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.435386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.435547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.435761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.435909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.435937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 
00:34:42.221 [2024-07-21 03:44:27.436650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.436940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.436966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.437960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.437987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 
00:34:42.221 [2024-07-21 03:44:27.438076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.438227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.438410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.438579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.438773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.438890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.438932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.439080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.439107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.439206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.439232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.439337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.439363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 00:34:42.221 [2024-07-21 03:44:27.439481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.221 [2024-07-21 03:44:27.439507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.221 qpair failed and we were unable to recover it. 
00:34:42.221 [2024-07-21 03:44:27.439633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.221 [2024-07-21 03:44:27.439662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:42.221 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 03:44:27.439762 through 03:44:27.476174 (elapsed-time prefix 00:34:42.221 through 00:34:42.225), alternating between tqpair=0x7fb5ec000b90 and tqpair=0x7fb5fc000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:42.225 [2024-07-21 03:44:27.476311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.225 [2024-07-21 03:44:27.476340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.225 qpair failed and we were unable to recover it. 00:34:42.225 [2024-07-21 03:44:27.476510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.225 [2024-07-21 03:44:27.476545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.225 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.476665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.476709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.476855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.476892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.477825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.477881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 
00:34:42.226 [2024-07-21 03:44:27.477993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.478861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.478986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.479159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.479359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.479499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 
00:34:42.226 [2024-07-21 03:44:27.479659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.479785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.479958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.479985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.226 [2024-07-21 03:44:27.480949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.226 [2024-07-21 03:44:27.480977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.226 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.481121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 
00:34:42.227 [2024-07-21 03:44:27.481291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.481439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.481560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.481725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.481906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.481933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 
00:34:42.227 [2024-07-21 03:44:27.482845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.227 [2024-07-21 03:44:27.482884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.227 qpair failed and we were unable to recover it. 00:34:42.227 [2024-07-21 03:44:27.482994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.509 [2024-07-21 03:44:27.483035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.509 qpair failed and we were unable to recover it. 00:34:42.509 [2024-07-21 03:44:27.483127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.509 [2024-07-21 03:44:27.483154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.509 qpair failed and we were unable to recover it. 00:34:42.509 [2024-07-21 03:44:27.483277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.509 [2024-07-21 03:44:27.483303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.509 qpair failed and we were unable to recover it. 00:34:42.509 [2024-07-21 03:44:27.483454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.483481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.483597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.483629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.483768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.483795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.483919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.483946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 
00:34:42.510 [2024-07-21 03:44:27.484324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.484941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.484972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 
00:34:42.510 [2024-07-21 03:44:27.485796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.485954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.485980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.486940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.486968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 
00:34:42.510 [2024-07-21 03:44:27.487240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.487862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.487992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 
00:34:42.510 [2024-07-21 03:44:27.488674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.488922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.488948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.510 [2024-07-21 03:44:27.489072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.510 [2024-07-21 03:44:27.489100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.510 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.489212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.489241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.489390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.489416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.489541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.489567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.489721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.489748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.489903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.489929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 
00:34:42.511 [2024-07-21 03:44:27.490174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.490950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.490976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 
00:34:42.511 [2024-07-21 03:44:27.491665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.491951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.491980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.492948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.492995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.493114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 
00:34:42.511 [2024-07-21 03:44:27.493292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.493435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.493586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.493756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.493933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.493963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 
00:34:42.511 [2024-07-21 03:44:27.494824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.494851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.494998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.495024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.495166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.495195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.511 qpair failed and we were unable to recover it. 00:34:42.511 [2024-07-21 03:44:27.495295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.511 [2024-07-21 03:44:27.495325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.495467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.495494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.495621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.495648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.495801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.495828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.495931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.495957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.496083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.496109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 00:34:42.512 [2024-07-21 03:44:27.496250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.512 [2024-07-21 03:44:27.496279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.512 qpair failed and we were unable to recover it. 
00:34:42.512 [2024-07-21 03:44:27.496416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.512 [2024-07-21 03:44:27.496443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.512 qpair failed and we were unable to recover it.
00:34:42.512 [2024-07-21 03:44:27.498460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.512 [2024-07-21 03:44:27.498518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:42.512 qpair failed and we were unable to recover it.
00:34:42.515 [2024-07-21 03:44:27.514391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.515 [2024-07-21 03:44:27.514435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.515 qpair failed and we were unable to recover it.
00:34:42.517 [... the same three-record sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x1bba840, 0x7fb5ec000b90 and 0x7fb5fc000b90, all targeting addr=10.0.0.2, port=4420, through 2024-07-21 03:44:27.529 ...]
00:34:42.517 [2024-07-21 03:44:27.530089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.530934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.530961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.531112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.531244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.531377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 
00:34:42.517 [2024-07-21 03:44:27.531558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.531877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.531910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.532000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.532047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.532238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.532286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.532473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.532522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.532732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.532759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.517 [2024-07-21 03:44:27.532857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.517 [2024-07-21 03:44:27.532894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.517 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.533044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.533282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 
00:34:42.518 [2024-07-21 03:44:27.533459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.533646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.533787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.533916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.533944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.534900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.534926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 
00:34:42.518 [2024-07-21 03:44:27.535068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.535230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.535378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.535540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.535704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.535818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.535864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 
00:34:42.518 [2024-07-21 03:44:27.536618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.536883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.536928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.537860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.537887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.538028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.538058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 
00:34:42.518 [2024-07-21 03:44:27.538161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.538190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.538300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.538334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.518 [2024-07-21 03:44:27.538525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.518 [2024-07-21 03:44:27.538584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.518 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.538718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.538758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.538858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.538889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.539026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.539190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.539437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.539635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.539760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 
00:34:42.519 [2024-07-21 03:44:27.539896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.539923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.540934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.540963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.541090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.541287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.541447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 
00:34:42.519 [2024-07-21 03:44:27.541608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.541739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.541853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.541899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.542850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.542972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 
00:34:42.519 [2024-07-21 03:44:27.543132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.543266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.543394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.543590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.543733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.543875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.543905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 
00:34:42.519 [2024-07-21 03:44:27.544697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.544886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.519 [2024-07-21 03:44:27.544981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.519 [2024-07-21 03:44:27.545009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.519 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.545901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.545932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 
00:34:42.520 [2024-07-21 03:44:27.546206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.546959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.546989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.547120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.547326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.547493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.547626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 
00:34:42.520 [2024-07-21 03:44:27.547760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.547905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.547940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.548887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.548991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.549176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 
00:34:42.520 [2024-07-21 03:44:27.549354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.549523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.549695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.549843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.549889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 00:34:42.520 [2024-07-21 03:44:27.550762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.520 [2024-07-21 03:44:27.550788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.520 qpair failed and we were unable to recover it. 
00:34:42.520 [2024-07-21 03:44:27.550897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.520 [2024-07-21 03:44:27.550923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.520 qpair failed and we were unable to recover it.
00:34:42.520 [... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 03:44:27.550 through 03:44:27.585, cycling over tqpair values 0x1bba840, 0x7fb5f4000b90, 0x7fb5ec000b90, and 0x7fb5fc000b90, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:42.526 [2024-07-21 03:44:27.585396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.526 [2024-07-21 03:44:27.585425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.526 qpair failed and we were unable to recover it.
00:34:42.526 [2024-07-21 03:44:27.585538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.585564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.585656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.585683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.585801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.585828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.585909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.585935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.586811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 
00:34:42.526 [2024-07-21 03:44:27.586935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.586990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.587228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.587350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.587510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.587678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.587849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.587997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.588201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.588362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.588552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 
00:34:42.526 [2024-07-21 03:44:27.588726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.588878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.588920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.589918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.589949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.526 qpair failed and we were unable to recover it. 00:34:42.526 [2024-07-21 03:44:27.590099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.526 [2024-07-21 03:44:27.590126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.590280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.590310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 
00:34:42.527 [2024-07-21 03:44:27.590409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.590438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.590599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.590635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.590785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.590812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.590935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.590962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.591139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.591169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.591387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.591416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.591575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.591604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.591732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.591758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.591879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.591905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.592045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 
00:34:42.527 [2024-07-21 03:44:27.592195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.592377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.592572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.592744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.592862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.592891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.593031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.593204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.593386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.593551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.593724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 
00:34:42.527 [2024-07-21 03:44:27.593889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.593919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.594891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.594940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.595052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.595273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.595418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 
00:34:42.527 [2024-07-21 03:44:27.595570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.595752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.595921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.527 [2024-07-21 03:44:27.595950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.527 qpair failed and we were unable to recover it. 00:34:42.527 [2024-07-21 03:44:27.596038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.596217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.596357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.596527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.596682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.596861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.596915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.597034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 
00:34:42.528 [2024-07-21 03:44:27.597197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.597365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.597561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.597768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.597943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.597978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.598103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.598133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.598267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.598296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.598422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.598452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.598619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.598665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.598759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.598787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 
00:34:42.528 [2024-07-21 03:44:27.598967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.599909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.599942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.600053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.600211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.600417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 
00:34:42.528 [2024-07-21 03:44:27.600583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.600778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.600924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.600961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.601917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.601962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.602140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.602187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 
00:34:42.528 [2024-07-21 03:44:27.602330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.602377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.602511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.602541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.602635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.528 [2024-07-21 03:44:27.602674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.528 qpair failed and we were unable to recover it. 00:34:42.528 [2024-07-21 03:44:27.602793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.602823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.603924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.603954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 
00:34:42.529 [2024-07-21 03:44:27.604087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.604246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.604414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.604589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.604751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.604933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.604980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.605128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.605282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.605476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.605650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 
00:34:42.529 [2024-07-21 03:44:27.605785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.605937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.605964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.606131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.606160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.606350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.606379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.606514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.606542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.606702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.606731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.606839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.606890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.607008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.607055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.607152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.607180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 00:34:42.529 [2024-07-21 03:44:27.607327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.529 [2024-07-21 03:44:27.607353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.529 qpair failed and we were unable to recover it. 
00:34:42.529 [2024-07-21 03:44:27.607473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.529 [2024-07-21 03:44:27.607500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 
00:34:42.529 qpair failed and we were unable to recover it. 
[~200 further repetitions of the same three-line failure elided: timestamps 03:44:27.607 through 03:44:27.644, tqpair values 0x7fb5f4000b90 / 0x7fb5fc000b90 / 0x7fb5ec000b90 / 0x1bba840, every attempt a connect() failure with errno = 111 against addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."]
00:34:42.535 [2024-07-21 03:44:27.644285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.644317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.644477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.644507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.644655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.644683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.644813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.644846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.644993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.645181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.645320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.645525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.645688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.645810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 
00:34:42.535 [2024-07-21 03:44:27.645960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.645987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.646922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.646952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.647084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.647264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.647481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 
00:34:42.535 [2024-07-21 03:44:27.647667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.647832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.647965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.647992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.648877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.648905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.649032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 
00:34:42.535 [2024-07-21 03:44:27.649178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.649339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.649475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.649621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.649883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.649915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.650061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.650087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.650220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.650249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.650345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.650375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.650480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.535 [2024-07-21 03:44:27.650511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.535 qpair failed and we were unable to recover it. 00:34:42.535 [2024-07-21 03:44:27.650653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.650680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 
00:34:42.536 [2024-07-21 03:44:27.650796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.650823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.650968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.650997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.651873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.651917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.652035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.652203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 
00:34:42.536 [2024-07-21 03:44:27.652390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.652526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.652690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.652841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.652868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.653866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.653893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 
00:34:42.536 [2024-07-21 03:44:27.653999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.654157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.654392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.654575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.654754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.654888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.654914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.655037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.655210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.655387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.655566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 
00:34:42.536 [2024-07-21 03:44:27.655701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.655865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.655893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.656043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.536 [2024-07-21 03:44:27.656074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.536 qpair failed and we were unable to recover it. 00:34:42.536 [2024-07-21 03:44:27.659752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.659793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.659974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.660126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.660315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.660454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.660594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.660799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.660826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 
00:34:42.537 [2024-07-21 03:44:27.661011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.661960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.661990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.662149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.662179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.662340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.662370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.662503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.662534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 
00:34:42.537 [2024-07-21 03:44:27.662690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.662718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.662843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.662870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.663040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.663251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.663439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.663606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.663757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.663949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.664140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.664296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 
00:34:42.537 [2024-07-21 03:44:27.664474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.664657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.664808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.664835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.664998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.665150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.665359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.665553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.665757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.665908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.665936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.666062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.666106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 
00:34:42.537 [2024-07-21 03:44:27.666234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.666266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.666460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.666492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.666649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.666677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.537 qpair failed and we were unable to recover it. 00:34:42.537 [2024-07-21 03:44:27.666768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.537 [2024-07-21 03:44:27.666795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.666880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.666928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.667060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.667222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.667386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.667522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.667739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 
00:34:42.538 [2024-07-21 03:44:27.667924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.667953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.668875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.668985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.669011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.669148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.669174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 00:34:42.538 [2024-07-21 03:44:27.669304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.538 [2024-07-21 03:44:27.669331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.538 qpair failed and we were unable to recover it. 
00:34:42.538 [2024-07-21 03:44:27.669416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.538 [2024-07-21 03:44:27.669457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:42.538 qpair failed and we were unable to recover it.
00:34:42.538 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats without interruption, roughly two hundred times, from 03:44:27.669416 through 03:44:27.702900, alternating between tqpair=0x7fb5ec000b90 and tqpair=0x7fb5fc000b90, always against addr=10.0.0.2, port=4420; intermediate repetitions elided ...]
00:34:42.544 [2024-07-21 03:44:27.702873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.544 [2024-07-21 03:44:27.702900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.544 qpair failed and we were unable to recover it.
00:34:42.544 [2024-07-21 03:44:27.703020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.703897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.703925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.704054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.704232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.704380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 
00:34:42.544 [2024-07-21 03:44:27.704517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.704696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.544 qpair failed and we were unable to recover it. 00:34:42.544 [2024-07-21 03:44:27.704847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.544 [2024-07-21 03:44:27.704875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.705924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.705954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 
00:34:42.545 [2024-07-21 03:44:27.706098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.706860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.706989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.707162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.707312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.707468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 
00:34:42.545 [2024-07-21 03:44:27.707589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.707766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.707944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.707971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.708900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.708927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 
00:34:42.545 [2024-07-21 03:44:27.709204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.709867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.709995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.710022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.545 qpair failed and we were unable to recover it. 00:34:42.545 [2024-07-21 03:44:27.710143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.545 [2024-07-21 03:44:27.710169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.710284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.710311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.710437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.710481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.710626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.710668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 
00:34:42.546 [2024-07-21 03:44:27.710785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.710813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.710933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.710959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.711923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.711950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.712116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.712287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 
00:34:42.546 [2024-07-21 03:44:27.712428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.712575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.712736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.712884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.712913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.713897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.713924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 
00:34:42.546 [2024-07-21 03:44:27.714066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.714409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.714564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.714751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.714898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.714926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.715098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.715128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.715272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.715331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.715437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.715464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.715588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.715636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 
00:34:42.546 [2024-07-21 03:44:27.715826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.715853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.715974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.716019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.716154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.716184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.716297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.716324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.716449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.716476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.716628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.546 [2024-07-21 03:44:27.716658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.546 qpair failed and we were unable to recover it. 00:34:42.546 [2024-07-21 03:44:27.716803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.716830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.716926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.716953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.717084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.717257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 
00:34:42.547 [2024-07-21 03:44:27.717411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.717537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.717720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.717888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.717917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.718814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 
00:34:42.547 [2024-07-21 03:44:27.718965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.718992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.719874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.719923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.720094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.720241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.720384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 
00:34:42.547 [2024-07-21 03:44:27.720529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.720683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.720868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.720896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.721933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.721960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 
00:34:42.547 [2024-07-21 03:44:27.722063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.722207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.722376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.722521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.722744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.547 qpair failed and we were unable to recover it. 00:34:42.547 [2024-07-21 03:44:27.722895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.547 [2024-07-21 03:44:27.722923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it. 00:34:42.548 [2024-07-21 03:44:27.723047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.548 [2024-07-21 03:44:27.723075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it. 00:34:42.548 [2024-07-21 03:44:27.723218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.548 [2024-07-21 03:44:27.723248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it. 00:34:42.548 [2024-07-21 03:44:27.723384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.548 [2024-07-21 03:44:27.723411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it. 00:34:42.548 [2024-07-21 03:44:27.723561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.548 [2024-07-21 03:44:27.723604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it. 
00:34:42.548 [2024-07-21 03:44:27.723789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.548 [2024-07-21 03:44:27.723817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.548 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats roughly 210 more times between 03:44:27.723789 and 03:44:27.759208 (wall-clock 00:34:42.548 through 00:34:42.553), cycling through the tqpair handles 0x7fb5ec000b90, 0x7fb5fc000b90, and 0x7fb5f4000b90; every attempt targets addr=10.0.0.2, port=4420 and fails identically with connect() errno = 111, followed by "qpair failed and we were unable to recover it." ...]
00:34:42.553 [2024-07-21 03:44:27.759405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.759435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.759575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.759601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.759737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.759764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.759909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.759935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.760050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.760097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.760255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.760284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.760412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.760439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.553 qpair failed and we were unable to recover it. 00:34:42.553 [2024-07-21 03:44:27.760563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.553 [2024-07-21 03:44:27.760591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.760740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.760767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.760886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.760912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 
00:34:42.554 [2024-07-21 03:44:27.761075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.761246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.761446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.761584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.761766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.761944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.761970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 
00:34:42.554 [2024-07-21 03:44:27.762701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.762956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.762986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.763078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.763107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.763311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.763370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.763475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.763503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.763631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.763665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.763842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.763900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.764038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.764230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 
00:34:42.554 [2024-07-21 03:44:27.764381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.764563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.764736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.764951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.764984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.765114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.765144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.765272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.765306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.765412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.554 [2024-07-21 03:44:27.765440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.554 qpair failed and we were unable to recover it. 00:34:42.554 [2024-07-21 03:44:27.765573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.765602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.765755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.765782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.765879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.765924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 
00:34:42.555 [2024-07-21 03:44:27.766067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.766255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.766405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.766554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.766739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.766897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.766942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.767072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.767237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.767435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.767582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 
00:34:42.555 [2024-07-21 03:44:27.767714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.767895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.767923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.768065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.768096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.768314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.768373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.768507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.768552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.768725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.768754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.768854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.768891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.769045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.769074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.769200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.769246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.769389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.769433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 
00:34:42.555 [2024-07-21 03:44:27.769569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.769599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.555 qpair failed and we were unable to recover it. 00:34:42.555 [2024-07-21 03:44:27.769724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.555 [2024-07-21 03:44:27.769751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.769873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.769919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.770882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.770911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.771043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 
00:34:42.556 [2024-07-21 03:44:27.771195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.771411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.771568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.771760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.771931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.771977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.772154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.772199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.772336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.772380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.772507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.772534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.772665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.772695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.772807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.772835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 
00:34:42.556 [2024-07-21 03:44:27.772979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.773020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.773204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.773256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.773424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.773451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.773586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.773622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.773797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.773831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.773970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.774151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.774408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.774552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.774731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 
00:34:42.556 [2024-07-21 03:44:27.774919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.774971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.775949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.775993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.776203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.776255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.776439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.776495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.776656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.776684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 
00:34:42.556 [2024-07-21 03:44:27.776825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.776854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.776957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.776986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.777222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.777251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.777427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.777480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.778327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.778361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.778506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.778536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.778687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.778714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.778817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.778844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.778967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.779014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.556 [2024-07-21 03:44:27.779172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.779217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 
00:34:42.556 [2024-07-21 03:44:27.779370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.556 [2024-07-21 03:44:27.779404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.556 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.779567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.779603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.779759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.779785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.779918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.779948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.780082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.780111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.780240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.780282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 
00:34:42.557 [2024-07-21 03:44:27.781671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.781937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.781964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.782917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.782943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 00:34:42.557 [2024-07-21 03:44:27.783092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.557 [2024-07-21 03:44:27.783135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.557 qpair failed and we were unable to recover it. 
00:34:42.557 [2024-07-21 03:44:27.783263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.557 [2024-07-21 03:44:27.783293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.557 qpair failed and we were unable to recover it.
[... the same three-record failure pattern repeats without interruption from 03:44:27.783 through 03:44:27.819 (job clock 00:34:42.557 to 00:34:42.843) for tqpair=0x1bba840, 0x7fb5ec000b90, 0x7fb5f4000b90, and 0x7fb5fc000b90, always against addr=10.0.0.2, port=4420: connect() fails with errno = 111 and each qpair cannot be recovered ...]
00:34:42.843 [2024-07-21 03:44:27.819921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.819950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.820928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.820956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.821103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.821146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.821287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.821333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 00:34:42.843 [2024-07-21 03:44:27.821463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.821490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.843 qpair failed and we were unable to recover it. 
00:34:42.843 [2024-07-21 03:44:27.821638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.843 [2024-07-21 03:44:27.821669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.821834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.821867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.822815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.822844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.823011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.823176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 
00:34:42.844 [2024-07-21 03:44:27.823342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.823525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.823707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.823878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.823917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.824886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.824931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 
00:34:42.844 [2024-07-21 03:44:27.825156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.825310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.825452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.825606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.825783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.825953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.825979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.826078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.826249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.826416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.826591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 
00:34:42.844 [2024-07-21 03:44:27.826788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.826955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.826987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.827196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.827243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.827341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.827370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.844 qpair failed and we were unable to recover it. 00:34:42.844 [2024-07-21 03:44:27.827487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.844 [2024-07-21 03:44:27.827525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.827660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.827687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.827801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.827827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.827973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.828138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.828277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 
00:34:42.845 [2024-07-21 03:44:27.828447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.828620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.828787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.828837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.828989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.829224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.829423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.829588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.829738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.829916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.829942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.830116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 
00:34:42.845 [2024-07-21 03:44:27.830253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.830407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.830572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.830739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.830857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.830888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.831031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.831214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.831387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.831527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.831694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 
00:34:42.845 [2024-07-21 03:44:27.831868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.831896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.832033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.832064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.832317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.832369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.832517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.832543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.832687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.832718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.832869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.832926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.833052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.833087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.833251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.833283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.833480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.833545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.833719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.833751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 
00:34:42.845 [2024-07-21 03:44:27.833887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.833920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.834050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.834080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.834213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.834242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.834399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.834445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.834591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.845 [2024-07-21 03:44:27.834626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.845 qpair failed and we were unable to recover it. 00:34:42.845 [2024-07-21 03:44:27.834763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.834790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.834923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.834949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.835090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.835247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.835450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 
00:34:42.846 [2024-07-21 03:44:27.835574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.835775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.835909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.835956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.836866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.836899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.837031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 
00:34:42.846 [2024-07-21 03:44:27.837194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.837361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.837604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.837737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.837919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.837946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.838095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.838138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.838299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.838330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.838458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.838487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.838618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.838655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.838823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.838852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 
00:34:42.846 [2024-07-21 03:44:27.838992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.839854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.839905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.840038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.840223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.840402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 
00:34:42.846 [2024-07-21 03:44:27.840590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.840726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.840872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.840901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.841064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.841115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.841217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.841248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.841413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.846 [2024-07-21 03:44:27.841444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.846 qpair failed and we were unable to recover it. 00:34:42.846 [2024-07-21 03:44:27.841583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.841609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.841778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.841805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.841943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.841972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.842096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 
00:34:42.847 [2024-07-21 03:44:27.842297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.842458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.842639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.842802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.842945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.842975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.843137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.843166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.843295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.843325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.843417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.843446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.843600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.843667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.843811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.843858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 
00:34:42.847 [2024-07-21 03:44:27.843974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.844811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.844974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.845000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.845098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.845124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.845242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.845268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 00:34:42.847 [2024-07-21 03:44:27.845381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.847 [2024-07-21 03:44:27.845428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.847 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-21 03:44:27.877430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.877460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.877590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.877626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.877768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.877794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.877907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.877952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.878941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.878969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-21 03:44:27.879129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.879159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.879254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.879300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.879468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.879499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.879631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.879682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.879817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.879848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.879997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.880159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.880315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.880451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.880605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-21 03:44:27.880771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.880943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.880987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.881134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.881179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.881346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.881376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.881539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.881566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.881696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.881725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-21 03:44:27.881836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.852 [2024-07-21 03:44:27.881875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-21 03:44:27.882505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.882964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.882992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.883793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-21 03:44:27.883945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.883972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.884123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.884151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.884276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.884305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.884501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.884530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.884700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.884727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.884847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.884873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.885034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.885226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.885424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.885601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-21 03:44:27.885724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.885874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.885916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.886968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.886996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.887124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.887153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-21 03:44:27.887280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.887308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.887440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.853 [2024-07-21 03:44:27.887492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-21 03:44:27.887628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.887662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.887788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.887833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.887985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.888188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.888330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.888493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.888680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.888824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.888850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 
00:34:42.854 [2024-07-21 03:44:27.889008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.889872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.889993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.890195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.890386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.890539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 
00:34:42.854 [2024-07-21 03:44:27.890666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.890829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.890967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.890995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.891109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.891177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.891327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.891354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.891452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.891481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.891635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.891694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.891854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.891885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.892022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.892191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 
00:34:42.854 [2024-07-21 03:44:27.892362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.892538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.892703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.892899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.892947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.893821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.893865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 
00:34:42.854 [2024-07-21 03:44:27.894037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.894067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.894246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.894291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.854 qpair failed and we were unable to recover it. 00:34:42.854 [2024-07-21 03:44:27.894418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.854 [2024-07-21 03:44:27.894448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.894588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.894636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.894784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.894815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.894919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.894951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.895096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.895128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.895329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.895361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.895504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.895534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.895698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.895739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 
00:34:42.855 [2024-07-21 03:44:27.895888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.895934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.896884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.896929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 
00:34:42.855 [2024-07-21 03:44:27.897502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.897841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.897980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.898915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.898947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 
00:34:42.855 [2024-07-21 03:44:27.899074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.899236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.899361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.899511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.899693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.899851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.899877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 
00:34:42.855 [2024-07-21 03:44:27.900650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.855 [2024-07-21 03:44:27.900890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.855 [2024-07-21 03:44:27.900916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.855 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.901928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.901958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 00:34:42.856 [2024-07-21 03:44:27.902089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.856 [2024-07-21 03:44:27.902156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.856 qpair failed and we were unable to recover it. 
00:34:42.856 [2024-07-21 03:44:27.902330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.902375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.902500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.902527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.902626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.902654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.902765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.902796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.902980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.903266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.903427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.903604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.903816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.903955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.903983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.904902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.904945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.905077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.905107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.905330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.905391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.905528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.905558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.905708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.905735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.905877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.905907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.906883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.906984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.907178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.907370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.907536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.907713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.907867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.907896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.856 qpair failed and we were unable to recover it.
00:34:42.856 [2024-07-21 03:44:27.908035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.856 [2024-07-21 03:44:27.908065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.908916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.908943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.909858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.909900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.910822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.910850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.911971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.911998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.912935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.912964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.913125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.913312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.913472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.913663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.857 [2024-07-21 03:44:27.913842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.857 qpair failed and we were unable to recover it.
00:34:42.857 [2024-07-21 03:44:27.913982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.914938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.914969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.915952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.915982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.916965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.916993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.917865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.917995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.918829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.918976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.919794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.919994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.920066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.920243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.920302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.920402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.858 [2024-07-21 03:44:27.920431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.858 qpair failed and we were unable to recover it.
00:34:42.858 [2024-07-21 03:44:27.920567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.920595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.920719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.920779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.920893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.920921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.921897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.921926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.922909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.922954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.923886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.923915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.924872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.924900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.925838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.925885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.926818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.926847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.927009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.927037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.859 [2024-07-21 03:44:27.927168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.859 [2024-07-21 03:44:27.927197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.859 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.927331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.927361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.927470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.927496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.927631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.927672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.927807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.927836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.927987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.928211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.928397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.928558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.928682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.928831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.928858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.929922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.929952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.930081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.930111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.930249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.930284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.930396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.930426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.930587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.860 [2024-07-21 03:44:27.930626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.860 qpair failed and we were unable to recover it.
00:34:42.860 [2024-07-21 03:44:27.930769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.930797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.930895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.930938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.931915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.931945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.932072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.932197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 
00:34:42.860 [2024-07-21 03:44:27.932346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.932560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.932693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.932873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.932900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.860 qpair failed and we were unable to recover it. 00:34:42.860 [2024-07-21 03:44:27.933877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.860 [2024-07-21 03:44:27.933904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 
00:34:42.861 [2024-07-21 03:44:27.934010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.934858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.934972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.935162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.935349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.935517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 
00:34:42.861 [2024-07-21 03:44:27.935711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.935885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.935913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.936819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.936849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.937000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.937136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 
00:34:42.861 [2024-07-21 03:44:27.937294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.937495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.937692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.937895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.937939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.938909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.938937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 
00:34:42.861 [2024-07-21 03:44:27.939156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.939185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.939372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.939425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.939555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.939584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.939754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.939796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.940040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.940105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.940255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.940286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.861 qpair failed and we were unable to recover it. 00:34:42.861 [2024-07-21 03:44:27.940422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.861 [2024-07-21 03:44:27.940452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.940588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.940621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.940749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.940777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.940878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.940922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 
00:34:42.862 [2024-07-21 03:44:27.941037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.941218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.941397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.941561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.941726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.941874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.941900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.942085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.942285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.942456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.942664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 
00:34:42.862 [2024-07-21 03:44:27.942787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.942913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.942942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.943905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.943953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.944062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.944240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 
00:34:42.862 [2024-07-21 03:44:27.944401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.944591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.944797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.944934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.944965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.945959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.945988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 
00:34:42.862 [2024-07-21 03:44:27.946159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.946189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.946353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.946382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.946516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.946545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.946711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.946738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.862 qpair failed and we were unable to recover it. 00:34:42.862 [2024-07-21 03:44:27.946861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.862 [2024-07-21 03:44:27.946907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.947078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.947230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.947386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.947560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.947779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 
00:34:42.863 [2024-07-21 03:44:27.947935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.947968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.948151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.948337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.948487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.948700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.948864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.948990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.949099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.949282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.949433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 
00:34:42.863 [2024-07-21 03:44:27.949584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.949741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.949859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.949887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.950048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.950092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.950264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.950294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.950462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.950489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.950641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.950669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.950812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.950857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.951004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.951162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 
00:34:42.863 [2024-07-21 03:44:27.951355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.951476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.951662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.951840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.951867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.952907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.952938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 
00:34:42.863 [2024-07-21 03:44:27.953076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.953107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.953335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.953392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.953530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.953556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.953680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.863 [2024-07-21 03:44:27.953708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.863 qpair failed and we were unable to recover it. 00:34:42.863 [2024-07-21 03:44:27.953804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.953830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.953947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.953981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.954144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.954285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.954464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.954635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 
00:34:42.864 [2024-07-21 03:44:27.954759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.954924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.954954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.955957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.955986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.956139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.956279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 
00:34:42.864 [2024-07-21 03:44:27.956438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.956565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.956738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.956891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.956917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.957938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.957968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 
00:34:42.864 [2024-07-21 03:44:27.958100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.958900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.958994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.959162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.959327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.959492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 
00:34:42.864 [2024-07-21 03:44:27.959635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.959781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.959907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.959934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.960080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.960109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.864 [2024-07-21 03:44:27.960259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.864 [2024-07-21 03:44:27.960286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.864 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.960409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.960437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.960563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.960607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.960764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.960789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.960926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.960954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.961124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 
00:34:42.865 [2024-07-21 03:44:27.961342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.961472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.961627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.961817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.961961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.961990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.962156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.962317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.962473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.962588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.962792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 
00:34:42.865 [2024-07-21 03:44:27.962959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.962988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.963205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.963235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.963388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.963417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.963554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.963583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.963756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.963782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.963909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.963938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.964064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.964240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.964436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.964598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 
00:34:42.865 [2024-07-21 03:44:27.964752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.964904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.964931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.965942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.965972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.966099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.966263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 
00:34:42.865 [2024-07-21 03:44:27.966424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.966591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.966766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.966891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.966918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.967048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.967075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.865 [2024-07-21 03:44:27.967225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.865 [2024-07-21 03:44:27.967254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.865 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.967414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.967444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.967579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.967609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.967767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.967795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.967918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.967944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 
00:34:42.866 [2024-07-21 03:44:27.968087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.968246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.968378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.968566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.968735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.968904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.968936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.969034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.969179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.969349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.969496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 
00:34:42.866 [2024-07-21 03:44:27.969674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.969872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.969899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.970963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.970988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.971156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.971184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 
00:34:42.866 [2024-07-21 03:44:27.971304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.971331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.971479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.971505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.971660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.971706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.971830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.971858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.971977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.972106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.972254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.972425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.972575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.866 [2024-07-21 03:44:27.972783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 
00:34:42.866 [2024-07-21 03:44:27.972955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.866 [2024-07-21 03:44:27.972984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.866 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.973955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.973984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.974088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.974236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.974410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 
00:34:42.867 [2024-07-21 03:44:27.974587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.974771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.974919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.974960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.975963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.975989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.976103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 
00:34:42.867 [2024-07-21 03:44:27.976278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.976395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.976552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.976761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.976931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.976961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.977062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.977226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.977377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.977555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.977714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 
00:34:42.867 [2024-07-21 03:44:27.977872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.977899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.978854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.978882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.979031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.979058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.979254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.979281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.979444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.979473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 
00:34:42.867 [2024-07-21 03:44:27.979649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.867 [2024-07-21 03:44:27.979676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.867 qpair failed and we were unable to recover it. 00:34:42.867 [2024-07-21 03:44:27.979799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.979826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.979953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.979995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.980910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.980939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.981067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 
00:34:42.868 [2024-07-21 03:44:27.981187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.981339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.981513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.981681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.981853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.981879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 
00:34:42.868 [2024-07-21 03:44:27.982802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.982957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.982984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.983884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.983910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 
00:34:42.868 [2024-07-21 03:44:27.984400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.984959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.984985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.985074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.985100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.985215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.985242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.985336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.985362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.985480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.985507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 00:34:42.868 [2024-07-21 03:44:27.985595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.868 [2024-07-21 03:44:27.985627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.868 qpair failed and we were unable to recover it. 
00:34:42.871 [2024-07-21 03:44:28.001105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.871 [2024-07-21 03:44:28.001145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.871 qpair failed and we were unable to recover it.
[log condensed: the same triplet repeats for tqpair=0x7fb5f4000b90 through 03:44:28.007711]
00:34:42.872 [2024-07-21 03:44:28.007926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.872 [2024-07-21 03:44:28.007971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:42.872 qpair failed and we were unable to recover it.
00:34:42.872 [2024-07-21 03:44:28.009711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.872 [2024-07-21 03:44:28.009752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:42.872 qpair failed and we were unable to recover it.
[log condensed: further identical connect() failures alternate among tqpair=0x7fb5ec000b90, 0x7fb5f4000b90, 0x7fb5fc000b90, and 0x1bba840, all errno = 111 against 10.0.0.2:4420]
00:34:42.873 [2024-07-21 03:44:28.016427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.016459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.016620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.016647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.016750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.016777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.016925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.016951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.873 qpair failed and we were unable to recover it. 00:34:42.873 [2024-07-21 03:44:28.017950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.873 [2024-07-21 03:44:28.017981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 
00:34:42.874 [2024-07-21 03:44:28.018092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.018120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.018236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.018277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.018446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.018476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.018651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.018682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.018805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.018830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.018994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.019129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.019305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.019464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.019596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 
00:34:42.874 [2024-07-21 03:44:28.019792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.019958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.019986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.020139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.020273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.020438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.020600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.020800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.020958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.021148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.021358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 
00:34:42.874 [2024-07-21 03:44:28.021565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.021752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.021917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.021947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.022142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.022171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.022359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.022413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.022556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.022586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.022706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.022735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.022877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.022922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.023094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.023138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.023379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.023430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 
00:34:42.874 [2024-07-21 03:44:28.023525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.023559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.023715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.023756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.023855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.023883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.024047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.024088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.874 qpair failed and we were unable to recover it. 00:34:42.874 [2024-07-21 03:44:28.024258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.874 [2024-07-21 03:44:28.024289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.024424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.024454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.024560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.024591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.024792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.024838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.024982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.025283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 
00:34:42.875 [2024-07-21 03:44:28.025459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.025586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.025767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.025903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.025932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.026125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.026180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.026361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.026421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.026558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.026590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.026718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.026745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.026885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.026914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.027011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.027040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 
00:34:42.875 [2024-07-21 03:44:28.027207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.027238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.027417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.027463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.027599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.027652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.027780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.027820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.027990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.028061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.028219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.028292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.028424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.028453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.028629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.028674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.028830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.028857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.029000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 
00:34:42.875 [2024-07-21 03:44:28.029236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.029423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.029562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.029754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.029886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.029914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.030037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.030233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.030426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.030624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.030756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 
00:34:42.875 [2024-07-21 03:44:28.030878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.030909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.031058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.031101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.031210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.031240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.031395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.031425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.875 qpair failed and we were unable to recover it. 00:34:42.875 [2024-07-21 03:44:28.031548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.875 [2024-07-21 03:44:28.031574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.031699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.031729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.031863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.031890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.032017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.032192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.032357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 
00:34:42.876 [2024-07-21 03:44:28.032555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.032746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.032890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.032917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.033883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.033910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 
00:34:42.876 [2024-07-21 03:44:28.034179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.034929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.034956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.035073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.035242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.035430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.035567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 
00:34:42.876 [2024-07-21 03:44:28.035747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.035872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.035899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.036860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.036887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.037041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 
00:34:42.876 [2024-07-21 03:44:28.037460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.037601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.037787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.037915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.037942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.876 qpair failed and we were unable to recover it. 00:34:42.876 [2024-07-21 03:44:28.038105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.876 [2024-07-21 03:44:28.038132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.038305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.038335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.038472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.038502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.038643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.038687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.038823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.038851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.038986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 
00:34:42.877 [2024-07-21 03:44:28.039137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.039298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.039451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.039590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.039746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.039917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.039960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.040122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.040152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.040272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.040316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.040428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.040458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 00:34:42.877 [2024-07-21 03:44:28.040587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.877 [2024-07-21 03:44:28.040624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.877 qpair failed and we were unable to recover it. 
00:34:42.877 [2024-07-21 03:44:28.040897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.877 [2024-07-21 03:44:28.040923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 
00:34:42.877 qpair failed and we were unable to recover it. 
[... the same three-line error repeats back-to-back roughly 200 times between 03:44:28.040897 and 03:44:28.073765, alternating between tqpair=0x7fb5fc000b90 and tqpair=0x7fb5ec000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:34:42.882 [2024-07-21 03:44:28.073904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.073932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.074894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.882 [2024-07-21 03:44:28.074992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.882 [2024-07-21 03:44:28.075020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.882 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.075136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.075166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.075311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.075339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 
00:34:42.883 [2024-07-21 03:44:28.075429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.075460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.075588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.075621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.075774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.075801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.075978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.076855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.076976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 
00:34:42.883 [2024-07-21 03:44:28.077168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.077354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.077507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.077674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.077859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.077886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 
00:34:42.883 [2024-07-21 03:44:28.078799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.078921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.078948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.079892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.079938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.080082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.080233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 
00:34:42.883 [2024-07-21 03:44:28.080376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.080554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.080745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.080894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.080922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.081108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.081140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.081316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.081343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.081437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.081465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.081637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.081668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.883 qpair failed and we were unable to recover it. 00:34:42.883 [2024-07-21 03:44:28.081811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.883 [2024-07-21 03:44:28.081838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.081984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 
00:34:42.884 [2024-07-21 03:44:28.082199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.082356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.082480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.082655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.082806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.082928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.082956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 
00:34:42.884 [2024-07-21 03:44:28.083664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.083879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.083991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.084959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.084986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.085099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.085126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 
00:34:42.884 [2024-07-21 03:44:28.085272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.085299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.085416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.085445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.085575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.085626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.085785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.884 [2024-07-21 03:44:28.085814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.884 qpair failed and we were unable to recover it. 00:34:42.884 [2024-07-21 03:44:28.085938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.085966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.086066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.086211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.086373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.086568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.086748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 
00:34:42.885 [2024-07-21 03:44:28.086910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.086940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.087100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.087130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.087243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.087275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.087489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.087519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.087669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.087698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.087815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.087842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 
00:34:42.885 [2024-07-21 03:44:28.088683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.088964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.088995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.089174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.089231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.089374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.089418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.089587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.089621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.089745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.089772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.089934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.089964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.090136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.090311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 
00:34:42.885 [2024-07-21 03:44:28.090481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.090621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.090779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.090960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.090990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.091230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.091284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.091416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.885 [2024-07-21 03:44:28.091446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.885 qpair failed and we were unable to recover it. 00:34:42.885 [2024-07-21 03:44:28.091585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.091626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.091772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.091800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.091944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.091974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.092107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 
00:34:42.886 [2024-07-21 03:44:28.092273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.092405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.092598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.092790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.092962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.092993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.093122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.093317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.093485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.093625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.093765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 
00:34:42.886 [2024-07-21 03:44:28.093919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.093969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.094910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.094956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.095115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.095250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.095378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 
00:34:42.886 [2024-07-21 03:44:28.095499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.095697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.095893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.095922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.096855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.096979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 
00:34:42.886 [2024-07-21 03:44:28.097101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.097254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.097406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.097524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.097681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.886 [2024-07-21 03:44:28.097828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.886 [2024-07-21 03:44:28.097855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.886 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.097998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.098148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.098361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.098526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 
00:34:42.887 [2024-07-21 03:44:28.098668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.098796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.098939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.098969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.099878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.099990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 
00:34:42.887 [2024-07-21 03:44:28.100163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.100427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.100590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.100769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.100941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.100969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.101132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.101162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.101307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.101350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.101533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.101563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.101723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.101751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.101877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.101922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 
00:34:42.887 [2024-07-21 03:44:28.102028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.102187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.102377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.102545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.102728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.102851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.102878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 
00:34:42.887 [2024-07-21 03:44:28.103683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.103953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.103998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.104157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.104187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.104290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.887 [2024-07-21 03:44:28.104320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.887 qpair failed and we were unable to recover it. 00:34:42.887 [2024-07-21 03:44:28.104415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.104445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.104594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.104631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.104776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.104803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.104908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.104948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.105083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 
00:34:42.888 [2024-07-21 03:44:28.105268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.105434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.105561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.105801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.105965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.105996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.106188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.106367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.106517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.106673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.106804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 
00:34:42.888 [2024-07-21 03:44:28.106952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.106983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.107860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.107888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.108080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.108241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.108369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 
00:34:42.888 [2024-07-21 03:44:28.108551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.108708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.108912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.108957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.109099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.109143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.109311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.109341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.109482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.109511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.109632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.109660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.109825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.109870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.110015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.110228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 
00:34:42.888 [2024-07-21 03:44:28.110380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.110555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.110745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.888 qpair failed and we were unable to recover it. 00:34:42.888 [2024-07-21 03:44:28.110930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.888 [2024-07-21 03:44:28.110961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.111165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.111230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.111485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.111539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.111708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.111736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.111904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.111934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.112051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.112277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 
00:34:42.889 [2024-07-21 03:44:28.112438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.112628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.112751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.112928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.112955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.113130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.113184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.113338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.113368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.113527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.113557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.113671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.113705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.113831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.113858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.114005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 
00:34:42.889 [2024-07-21 03:44:28.114227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.114383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.114573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.114774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.114897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.114939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.115083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.115305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.115463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.115662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.115806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 
00:34:42.889 [2024-07-21 03:44:28.115953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.115980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.116129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.116158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.116318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.116347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.116506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.116536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.116696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.116737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.116869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.116899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.117019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.117063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.117169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.889 [2024-07-21 03:44:28.117199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.889 qpair failed and we were unable to recover it. 00:34:42.889 [2024-07-21 03:44:28.117332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.117361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.117494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.117524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 
00:34:42.890 [2024-07-21 03:44:28.117679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.117707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.117828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.117856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.117984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.118939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.118967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.119122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.119150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 
00:34:42.890 [2024-07-21 03:44:28.119348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.119378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.119539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.119569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.119691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.119718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.119854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.119885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.119985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.120144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.120353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.120544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.120726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.120943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.120974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 
00:34:42.890 [2024-07-21 03:44:28.121125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.121894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.121992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.122101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.122252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.122378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 
00:34:42.890 [2024-07-21 03:44:28.122518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.122674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.122831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.122859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.123004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.123031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.123176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.123203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.123346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.123373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.123466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.123493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.890 [2024-07-21 03:44:28.123583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.890 [2024-07-21 03:44:28.123610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.890 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.123775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.123805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.123934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.123964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 
00:34:42.891 [2024-07-21 03:44:28.124122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.124151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.124284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.124314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.124436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.124465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.124617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.124645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.124798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.124825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.124973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.125109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.125290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.125508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.125655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 
00:34:42.891 [2024-07-21 03:44:28.125834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.125861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.125986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.126131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.126313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.126480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.126739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.126904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.126934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.127090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.127139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.127309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.127358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 00:34:42.891 [2024-07-21 03:44:28.127483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.891 [2024-07-21 03:44:28.127510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:42.891 qpair failed and we were unable to recover it. 
00:34:42.891 [2024-07-21 03:44:28.127641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.891 [2024-07-21 03:44:28.127670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:42.891 qpair failed and we were unable to recover it.
00:34:42.891 [2024-07-21 03:44:28.127828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.891 [2024-07-21 03:44:28.127861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.891 qpair failed and we were unable to recover it.
00:34:42.891 [2024-07-21 03:44:28.128007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.891 [2024-07-21 03:44:28.128040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.891 qpair failed and we were unable to recover it.
00:34:42.891 [2024-07-21 03:44:28.128178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.891 [2024-07-21 03:44:28.128222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.891 qpair failed and we were unable to recover it.
00:34:42.891 [2024-07-21 03:44:28.128358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.891 [2024-07-21 03:44:28.128392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:42.891 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.128529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.128559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.128682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.128712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.128816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.128846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.128975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.129928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.129955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.130850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.130898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.131887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.131914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.132928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.132957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.133117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.133257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.133420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.133604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.173 [2024-07-21 03:44:28.133762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.173 qpair failed and we were unable to recover it.
00:34:43.173 [2024-07-21 03:44:28.133850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.133877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.134904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.134935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.135940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.135967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.136184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.136359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.136475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.136632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.136809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.136975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.137020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.137271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.137324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.137496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.137523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.137668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.137697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.137793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.137819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.137994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.138038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.138267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.138316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.138462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.138514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.138644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.138676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.138818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.138848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.138997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.139846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.139985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.140171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.140339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.140551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.140705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.140877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.174 [2024-07-21 03:44:28.140907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.174 qpair failed and we were unable to recover it.
00:34:43.174 [2024-07-21 03:44:28.141056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.141266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.141413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.141598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.141755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.141932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.141990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.142192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.142248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.142443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.142495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.142603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.142643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.142812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.142839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.142955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.142985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.143147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.143178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.143308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.143390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.143570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.143618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.143752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.143780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.143881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.143907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.144168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.144385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.144550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.144677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.144819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.144985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.145021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.145181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.145233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.145529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.145582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.145742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.145771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.145911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.145942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.146151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.146208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.146403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.146465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.146568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.146597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.146744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.146785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.146919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.146947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.147907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.147934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.148029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.148056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.148180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.148208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.148322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.148363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.175 qpair failed and we were unable to recover it.
00:34:43.175 [2024-07-21 03:44:28.148499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.175 [2024-07-21 03:44:28.148528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.148656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.148685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.148824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.148853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.149063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.149092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.149291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.149353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.149471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.149496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.149644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.149672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.149788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.149821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.150891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.150923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.151033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.151065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.151300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.151354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.151515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.151545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.151673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.151700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.151797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.151825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.152008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.152096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.152397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.152451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.152551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.152579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.152732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.152759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.152878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.152907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.153868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.153898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.154860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.154890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.155074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.155117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.155234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.155280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.155402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.155431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.155523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.155550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.176 qpair failed and we were unable to recover it.
00:34:43.176 [2024-07-21 03:44:28.155712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.176 [2024-07-21 03:44:28.155764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.155984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.156843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.156958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.157005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.157129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.157157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.157249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.157277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.157391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.157419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.157514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.177 [2024-07-21 03:44:28.157542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.177 qpair failed and we were unable to recover it.
00:34:43.177 [2024-07-21 03:44:28.157661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.157690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.157812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.157840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.157996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.158909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.158956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.159096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 
00:34:43.177 [2024-07-21 03:44:28.159292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.159408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.159525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.159694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.159865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.159893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.160053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.160129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.160283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.160328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.160453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.160481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.160631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.160688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.160843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.160876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 
00:34:43.177 [2024-07-21 03:44:28.161038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.161068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.177 qpair failed and we were unable to recover it. 00:34:43.177 [2024-07-21 03:44:28.161197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.177 [2024-07-21 03:44:28.161227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.161478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.161533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.161705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.161733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.161836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.161863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 
00:34:43.178 [2024-07-21 03:44:28.162771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.162966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.162996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.163891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.163923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.164083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.164218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 
00:34:43.178 [2024-07-21 03:44:28.164414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.164582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.164764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.164927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.164968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.165119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.165167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.165313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.165402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.165553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.165580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.165710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.165738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.165865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.165895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.166041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 
00:34:43.178 [2024-07-21 03:44:28.166202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.166407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.166583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.166767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.166952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.166980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.167182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.167246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.167343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.167372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.167491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.167520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.167660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.167690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.167814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.167859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 
00:34:43.178 [2024-07-21 03:44:28.167977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.178 [2024-07-21 03:44:28.168005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.178 qpair failed and we were unable to recover it. 00:34:43.178 [2024-07-21 03:44:28.168130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.168934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.168962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.169068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.169200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 
00:34:43.179 [2024-07-21 03:44:28.169357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.169487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.169685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.169849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.169878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.170889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.170920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 
00:34:43.179 [2024-07-21 03:44:28.171049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.171098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.171275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.171320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.171458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.171499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.171632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.171677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.171839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.171869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.172023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.172184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.172352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.172551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.172714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 
00:34:43.179 [2024-07-21 03:44:28.172854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.172899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.173834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.173861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.174007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.174034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.174156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.174184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.174299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.174326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 
00:34:43.179 [2024-07-21 03:44:28.174454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.174495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.179 qpair failed and we were unable to recover it. 00:34:43.179 [2024-07-21 03:44:28.174632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.179 [2024-07-21 03:44:28.174663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.174754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.174781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.174905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.174932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.175895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.175926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 
00:34:43.180 [2024-07-21 03:44:28.176064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.176095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.176255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.176285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.176461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.176490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.176586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.176622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.176769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.176814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.177005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.177191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.177378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.177543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.177736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 
00:34:43.180 [2024-07-21 03:44:28.177888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.177918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.178837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.178974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.179130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.179263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 
00:34:43.180 [2024-07-21 03:44:28.179423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.179607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.179785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.179959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.179989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.180148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.180177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.180310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.180340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.180511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.180539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.180700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.180728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.180830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.180857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.181024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.181070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 
00:34:43.180 [2024-07-21 03:44:28.181213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.181258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.180 [2024-07-21 03:44:28.181385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.180 [2024-07-21 03:44:28.181412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.180 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.181537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.181565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.181715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.181760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.181902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.181952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.182043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.182239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.182417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.182571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.182749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 
00:34:43.181 [2024-07-21 03:44:28.182924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.182952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.183871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.183916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.184049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.184207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.184344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 
00:34:43.181 [2024-07-21 03:44:28.184498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.184694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.184881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.184946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.185916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.185944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.186086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 
00:34:43.181 [2024-07-21 03:44:28.186247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.186425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.186633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.186780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.186898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.186925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.187062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.187092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.181 [2024-07-21 03:44:28.187229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.181 [2024-07-21 03:44:28.187259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.181 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.187388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.187418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.187544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.187586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.187729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.187759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 
00:34:43.182 [2024-07-21 03:44:28.187894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.187939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.188034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.188063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.188193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.188221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.188427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.188497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.188639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.188684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.188831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.188858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.189065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.189132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.189377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.189428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.189554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.189585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.189754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.189794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 
00:34:43.182 [2024-07-21 03:44:28.189990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.190035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.190208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.190269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.190473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.190562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.190699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.190727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.190852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.190893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.190995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.191023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.191185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.191243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.191436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.191490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.191666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.191693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.191810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.191835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 
00:34:43.182 [2024-07-21 03:44:28.192000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.192030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.192246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.192297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.192457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.192486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.192672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.192700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.192851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.192893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.193094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.193261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.193456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.193595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.193742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 
00:34:43.182 [2024-07-21 03:44:28.193867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.193893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.194876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.182 [2024-07-21 03:44:28.194928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.182 qpair failed and we were unable to recover it. 00:34:43.182 [2024-07-21 03:44:28.195092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.195263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.195418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 
00:34:43.183 [2024-07-21 03:44:28.195566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.195735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.195930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.195958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.196941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.196968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.197080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 
00:34:43.183 [2024-07-21 03:44:28.197260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.197425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.197618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.197734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.197882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.197910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.198110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.198275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.198477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.198619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.198770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 
00:34:43.183 [2024-07-21 03:44:28.198929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.198956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.199148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.199369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.199565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.199749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.199879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.199998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.200158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.200319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.200491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 
00:34:43.183 [2024-07-21 03:44:28.200644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.200795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.200913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.200955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.201099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.201129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.201258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.201305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.201450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.201479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.183 qpair failed and we were unable to recover it. 00:34:43.183 [2024-07-21 03:44:28.201579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.183 [2024-07-21 03:44:28.201620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.201746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.201776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.201880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.201910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 
00:34:43.184 [2024-07-21 03:44:28.202223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.202937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.202964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.203108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.203138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.203306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.203341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.203504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.203534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.203678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.203705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 
00:34:43.184 [2024-07-21 03:44:28.203830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.203858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.204970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.204998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.205237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.205293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.205534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.205585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 
00:34:43.184 [2024-07-21 03:44:28.205747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.205776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.205922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.205968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.206112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.206152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.206403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.206470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.206608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.206658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.206812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.206839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.206979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.207139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.207313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.207493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 
00:34:43.184 [2024-07-21 03:44:28.207668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.207789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.207816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.207980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.208010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.208145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.208175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.208319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.208349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.208485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.184 [2024-07-21 03:44:28.208515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.184 qpair failed and we were unable to recover it. 00:34:43.184 [2024-07-21 03:44:28.208636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.208671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.208794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.208821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.208941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.208969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.209107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 
00:34:43.185 [2024-07-21 03:44:28.209272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.209408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.209575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.209727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.209869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.209903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.210088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.210132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.210272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.210302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.210467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.210494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.210623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.210650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.210793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.210839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 
00:34:43.185 [2024-07-21 03:44:28.211015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.211229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.211450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.211626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.211793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.211954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.211985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.212122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.212150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.212266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.212293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.212418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.212446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 00:34:43.185 [2024-07-21 03:44:28.212586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.185 [2024-07-21 03:44:28.212634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.185 qpair failed and we were unable to recover it. 
00:34:43.185 [2024-07-21 03:44:28.212808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.185 [2024-07-21 03:44:28.212853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.185 qpair failed and we were unable to recover it.
00:34:43.185 [last message sequence repeated 29 more times for tqpair=0x1bba840, 03:44:28.212991 through 03:44:28.217837]
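errno 111 on Linux is ECONNREFUSED: each TCP SYN to 10.0.0.2:4420 (the NVMe-oF port used by this test) is actively refused because nothing is listening there, so posix_sock_create reports the failed connect() and nvme_tcp_qpair_connect_sock gives up on the qpair. The standalone sketch below is not SPDK code; it only reproduces the errno-111 condition the log records (run it against a reachable host with no listener on the port):

/* sketch: a plain TCP connect() that fails the same way the log shows */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* port taken from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* address taken from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but no listener on the port, this prints:
           connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}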
00:34:43.186 [2024-07-21 03:44:28.218010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.186 [2024-07-21 03:44:28.218070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.186 qpair failed and we were unable to recover it.
00:34:43.186 [2024-07-21 03:44:28.219863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.186 [2024-07-21 03:44:28.219904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.186 qpair failed and we were unable to recover it.
00:34:43.188 [same connect()/errno = 111 sequence logged 68 more times, interleaved across tqpairs 0x1bba840, 0x7fb5ec000b90 and 0x7fb5f4000b90, 03:44:28.218222 through 03:44:28.229585]
00:34:43.188 [2024-07-21 03:44:28.231735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.188 [2024-07-21 03:44:28.231775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.188 qpair failed and we were unable to recover it.
00:34:43.190 [same connect()/errno = 111 sequence logged 68 more times between 03:44:28.229772 and 03:44:28.241173, interleaved across tqpairs 0x7fb5ec000b90, 0x1bba840 and 0x7fb5fc000b90]
00:34:43.190 [same connect()/errno = 111 sequence logged 6 times for tqpair=0x7fb5fc000b90, 03:44:28.241315 through 03:44:28.242063]
00:34:43.190 [2024-07-21 03:44:28.242102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc8390 (9): Bad file descriptor
00:34:43.191 [same connect()/errno = 111 sequence logged 33 times across tqpairs 0x1bba840, 0x7fb5ec000b90 and 0x7fb5fc000b90, 03:44:28.242292 through 03:44:28.247766; every attempt ended "qpair failed and we were unable to recover it."]
00:34:43.191 [2024-07-21 03:44:28.247920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.247949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.248894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.248921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.249043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.249069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.191 qpair failed and we were unable to recover it. 00:34:43.191 [2024-07-21 03:44:28.249186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.191 [2024-07-21 03:44:28.249221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.249333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.249359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 
00:34:43.192 [2024-07-21 03:44:28.249455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.249480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.249634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.249664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.249804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.249830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.249926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.249954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.250850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.250876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 
00:34:43.192 [2024-07-21 03:44:28.250991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.251932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.251960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 
00:34:43.192 [2024-07-21 03:44:28.252500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.252904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.252931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.192 qpair failed and we were unable to recover it. 00:34:43.192 [2024-07-21 03:44:28.253049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.192 [2024-07-21 03:44:28.253076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.253188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.253366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.253506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.253650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.253781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 
00:34:43.193 [2024-07-21 03:44:28.253958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.253986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.254968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.254995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 
00:34:43.193 [2024-07-21 03:44:28.255552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.255881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.255994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.256830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 00:34:43.193 [2024-07-21 03:44:28.256982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.257011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.193 qpair failed and we were unable to recover it. 
00:34:43.193 [2024-07-21 03:44:28.257187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.193 [2024-07-21 03:44:28.257213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.257335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.257378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.257504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.257537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.257681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.257708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.257832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.257859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.257981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.258105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.258234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.258373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.258568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 
00:34:43.194 [2024-07-21 03:44:28.258763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.258804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.258964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.259957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.259988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.260157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.260183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.260303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.260346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 
00:34:43.194 [2024-07-21 03:44:28.260491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.260518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.260668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.260697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.260819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.260846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.261012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.261042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.261173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.261200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.261319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.194 [2024-07-21 03:44:28.261362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.194 qpair failed and we were unable to recover it. 00:34:43.194 [2024-07-21 03:44:28.261522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.261552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.261684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.261711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.261836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.261863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.261988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 
00:34:43.195 [2024-07-21 03:44:28.262162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.262313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.262513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.262691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.262843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.262871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.263040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.263237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.263408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.263586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.263765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 
00:34:43.195 [2024-07-21 03:44:28.263964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.263994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.264939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.264969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.265127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.265155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.265279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.265308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.265410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.265438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 
00:34:43.195 [2024-07-21 03:44:28.265533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.195 [2024-07-21 03:44:28.265560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.195 qpair failed and we were unable to recover it. 00:34:43.195 [2024-07-21 03:44:28.265697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.265726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.265893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.265923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.266968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.266995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.267109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 
00:34:43.196 [2024-07-21 03:44:28.267285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.267461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.267640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.267786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.267905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.267934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.268035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.268064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.268212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.268240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.268374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.268404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.268541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.268571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 00:34:43.196 [2024-07-21 03:44:28.268717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.196 [2024-07-21 03:44:28.268745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.196 qpair failed and we were unable to recover it. 
00:34:43.196 [2024-07-21 03:44:28.268868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.196 [2024-07-21 03:44:28.268895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.196 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 03:44:28.269 and 03:44:28.302, alternating between tqpair=0x7fb5fc000b90 and tqpair=0x7fb5ec000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:43.203 [2024-07-21 03:44:28.302903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.203 [2024-07-21 03:44:28.302930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.203 qpair failed and we were unable to recover it.
00:34:43.203 [2024-07-21 03:44:28.303025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.303201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.303372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.303570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.303748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.303890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.303917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 
00:34:43.203 [2024-07-21 03:44:28.304571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.304810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.304974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.305971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.305997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.306165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.306194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 
00:34:43.203 [2024-07-21 03:44:28.306310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.306338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.306487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.306514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.306660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.306691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.306838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.306867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 00:34:43.203 [2024-07-21 03:44:28.307866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.203 [2024-07-21 03:44:28.307898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.203 qpair failed and we were unable to recover it. 
00:34:43.204 [2024-07-21 03:44:28.308048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.308224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.308375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.308516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.308703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.308881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.308907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 
00:34:43.204 [2024-07-21 03:44:28.309718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.309897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.309989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.310893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.310919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.311036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 
00:34:43.204 [2024-07-21 03:44:28.311172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.311354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.311531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.311709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.311864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.311891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.312076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.312281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.312436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.312574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.312734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 
00:34:43.204 [2024-07-21 03:44:28.312856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.312882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.313954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.313984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.314128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.314154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 00:34:43.204 [2024-07-21 03:44:28.314297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.204 [2024-07-21 03:44:28.314328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.204 qpair failed and we were unable to recover it. 
00:34:43.205 [2024-07-21 03:44:28.314447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.314477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.314624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.314651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.314759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.314786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.314926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.314954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.315930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.315960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 
00:34:43.205 [2024-07-21 03:44:28.316083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.316266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.316434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.316611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.316769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.316949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.316978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.317089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.317251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.317444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.317641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 
00:34:43.205 [2024-07-21 03:44:28.317788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.317966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.317993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.318948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.318976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.319134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.319300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 
00:34:43.205 [2024-07-21 03:44:28.319462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.319624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.319769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.319905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.319935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 00:34:43.205 [2024-07-21 03:44:28.320828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.205 [2024-07-21 03:44:28.320862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.205 qpair failed and we were unable to recover it. 
00:34:43.205 [2024-07-21 03:44:28.321006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.321968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.321995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 
00:34:43.206 [2024-07-21 03:44:28.322518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.322847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.322975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 00:34:43.206 [2024-07-21 03:44:28.323940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.206 [2024-07-21 03:44:28.323966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.206 qpair failed and we were unable to recover it. 
00:34:43.206 [2024-07-21 03:44:28.324119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.206 [2024-07-21 03:44:28.324148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.206 qpair failed and we were unable to recover it.
00:34:43.211 [... the same three-message error group repeats verbatim (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) from 03:44:28.324 through 03:44:28.358; duplicate occurrences omitted ...]
00:34:43.212 [2024-07-21 03:44:28.358823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.358852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.359966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.359992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 
00:34:43.212 [2024-07-21 03:44:28.360414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.360886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.360985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.361105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.361225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.361366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.361497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.361681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 
00:34:43.212 [2024-07-21 03:44:28.361833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.361859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.362942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.362971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.363112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.363264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 
00:34:43.212 [2024-07-21 03:44:28.363417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.363585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.363740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.363885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.363912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.364057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.364083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.364198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.364224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.364366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.364395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.212 [2024-07-21 03:44:28.364533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.212 [2024-07-21 03:44:28.364560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.212 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.364708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.364735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.364837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.364863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 
00:34:43.213 [2024-07-21 03:44:28.364979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.365930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.365956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.366079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.366106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.366248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.366277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.366444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.366474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 
00:34:43.213 [2024-07-21 03:44:28.366642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.366672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.366859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.366885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.367920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.367946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 
00:34:43.213 [2024-07-21 03:44:28.368194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.368930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.368956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.369043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.369206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.369375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.369590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 
00:34:43.213 [2024-07-21 03:44:28.369797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.369944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.369971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.370065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.370091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.370248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.213 [2024-07-21 03:44:28.370274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.213 qpair failed and we were unable to recover it. 00:34:43.213 [2024-07-21 03:44:28.370401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.370428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.370523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.370549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.370685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.370713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.370881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.370910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.371037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.371211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 
00:34:43.214 [2024-07-21 03:44:28.371383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.371557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.371712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.371868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.371894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.372793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 
00:34:43.214 [2024-07-21 03:44:28.372950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.372976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.373961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.373989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.374135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.374161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.374305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.374332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 00:34:43.214 [2024-07-21 03:44:28.374468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.214 [2024-07-21 03:44:28.374497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.214 qpair failed and we were unable to recover it. 
00:34:43.215 [2024-07-21 03:44:28.374638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.374665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.374779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.374806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.374990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.375956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.375985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.376135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 
00:34:43.215 [2024-07-21 03:44:28.376280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.376431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.376609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.376747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.376870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.376896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.377015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.377162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.377331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.377498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.377647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 
00:34:43.215 [2024-07-21 03:44:28.377845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.377875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.378039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.378065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.378227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.378257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.378406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.378432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.378579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.378605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.215 qpair failed and we were unable to recover it. 00:34:43.215 [2024-07-21 03:44:28.378784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.215 [2024-07-21 03:44:28.378813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it. 00:34:43.216 [2024-07-21 03:44:28.378972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.216 [2024-07-21 03:44:28.379001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it. 00:34:43.216 [2024-07-21 03:44:28.379172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.216 [2024-07-21 03:44:28.379198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it. 00:34:43.216 [2024-07-21 03:44:28.379357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.216 [2024-07-21 03:44:28.379386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it. 00:34:43.216 [2024-07-21 03:44:28.379521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.216 [2024-07-21 03:44:28.379554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it. 
00:34:43.216 [2024-07-21 03:44:28.379705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.216 [2024-07-21 03:44:28.379733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.216 qpair failed and we were unable to recover it.
00:34:43.223 [repetitive output collapsed: the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above recurs continuously from 03:44:28.379705 through 03:44:28.412648 (elapsed 00:34:43.216-00:34:43.223), alternating between tqpair=0x7fb5fc000b90 and tqpair=0x7fb5ec000b90, always against addr=10.0.0.2, port=4420, with every attempt ending in "qpair failed and we were unable to recover it."]
00:34:43.223 [2024-07-21 03:44:28.412752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.412779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.412900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.412926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.413834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.413879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.414040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.414245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 
00:34:43.223 [2024-07-21 03:44:28.414399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.414583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.414740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.414888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.414915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.415095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.415289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.415488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.415656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.415850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.415966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.416009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 
00:34:43.223 [2024-07-21 03:44:28.416145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.416174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.223 [2024-07-21 03:44:28.416289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.223 [2024-07-21 03:44:28.416315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.223 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.416461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.416488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.416631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.416674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.416796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.416823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.416991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.417201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.417351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.417493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.417669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-07-21 03:44:28.417873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.417900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.417996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.418926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.418953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.419053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.419236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-07-21 03:44:28.419352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.419566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.419754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.419926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.419956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.420795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-07-21 03:44:28.420969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.420998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.421961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.421995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.422140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.422167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.422312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.422338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.422466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.422497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-07-21 03:44:28.422628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.422656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.422785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.224 [2024-07-21 03:44:28.422811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-07-21 03:44:28.422956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.422985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.423936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.423966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.424134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 
00:34:43.225 [2024-07-21 03:44:28.424266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.424392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.424538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.424684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.424832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.424862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.425004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.425150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.425323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.425500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.425655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 
00:34:43.225 [2024-07-21 03:44:28.425857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.425887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.426822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.426981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.427147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.427309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 
00:34:43.225 [2024-07-21 03:44:28.427507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.427656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.427822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.427852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.427989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.428899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.428925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 
00:34:43.225 [2024-07-21 03:44:28.429013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.429041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.429146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.429176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.429321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.225 [2024-07-21 03:44:28.429348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.225 qpair failed and we were unable to recover it. 00:34:43.225 [2024-07-21 03:44:28.429469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.429496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.429648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.429679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.429800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.429827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.429926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.429952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.430105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.430263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.430465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-07-21 03:44:28.430599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.430784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.430930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.430956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.431957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.431983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.432156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-07-21 03:44:28.432324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.432476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.432624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.432783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.432937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.432962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.433109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.433137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.433272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.433298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.433397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.433422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.433539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.433584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-07-21 03:44:28.433688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-07-21 03:44:28.433715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-07-21 03:44:28.433837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:43.226 [2024-07-21 03:44:28.433865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 
00:34:43.226 qpair failed and we were unable to recover it. 
[... the three-line error sequence above repeats ≈210 times between 03:44:28.433 and 03:44:28.468 (log clock 00:34:43.226-00:34:43.516), identical except for the timestamps and the tqpair value, which cycles through 0x7fb5fc000b90, 0x7fb5ec000b90, 0x7fb5f4000b90, and 0x1bba840; every occurrence reports connect() failed, errno = 111 against addr=10.0.0.2, port=4420 ...]
00:34:43.516 [2024-07-21 03:44:28.468383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.468413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.468515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.468546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.468695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.468722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.468851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.468877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.469048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.469110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.469290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.469348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.469544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.469573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.516 qpair failed and we were unable to recover it. 00:34:43.516 [2024-07-21 03:44:28.469725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.516 [2024-07-21 03:44:28.469755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.469900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.469929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.470086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.470115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 
00:34:43.517 [2024-07-21 03:44:28.470278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.470344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.470459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.470488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.470672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.470697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.470822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.470847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.471028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.471324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.471485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.471638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.471819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.471991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 
00:34:43.517 [2024-07-21 03:44:28.472118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.472302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.472473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.472604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.472774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.472891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.472919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.473112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.473163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.473401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.473452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.473590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.473642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.473801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.473841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 
00:34:43.517 [2024-07-21 03:44:28.473985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.474054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.474312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.474364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.474489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.474517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.474657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.474697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.474853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.474880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.475027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.475209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.475379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.475551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.475729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 
00:34:43.517 [2024-07-21 03:44:28.475931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.475974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.476138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.476288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.476465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.476645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.476848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.476990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.477034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.477165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.477209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.517 qpair failed and we were unable to recover it. 00:34:43.517 [2024-07-21 03:44:28.477330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.517 [2024-07-21 03:44:28.477355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.477502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.477527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 
00:34:43.518 [2024-07-21 03:44:28.477666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.477711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.477826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.477865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.477962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.477989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.478844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.478874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.479004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 
00:34:43.518 [2024-07-21 03:44:28.479279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.479427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.479571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.479733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.479882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.479924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.480115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.480170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.480334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.480364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.480497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.480526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.480664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.480691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.480816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.480843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 
00:34:43.518 [2024-07-21 03:44:28.480987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.481132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.481333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.481494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.481687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.481864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.481909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.482051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.482226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.482393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.482527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 
00:34:43.518 [2024-07-21 03:44:28.482679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.482805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.482849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.483942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.483983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 00:34:43.518 [2024-07-21 03:44:28.484124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.518 [2024-07-21 03:44:28.484154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.518 qpair failed and we were unable to recover it. 
00:34:43.519 [2024-07-21 03:44:28.484313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.484348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.484495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.484522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.484630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.484671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.484771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.484798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.484917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.484942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.485167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.485225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.485341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.485402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.485539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.485568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.485721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.485749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.485874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.485902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 
00:34:43.519 [2024-07-21 03:44:28.486038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.486206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.486392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.486526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.486704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.486890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.486915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.487061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.487247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.487434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.487594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 
00:34:43.519 [2024-07-21 03:44:28.487773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.487899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.487942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.488926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.488962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.489090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.489220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 
00:34:43.519 [2024-07-21 03:44:28.489416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.489582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.489760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.489945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.489994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.490195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.490253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.490410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.490436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.519 qpair failed and we were unable to recover it. 00:34:43.519 [2024-07-21 03:44:28.490524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.519 [2024-07-21 03:44:28.490552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.520 qpair failed and we were unable to recover it. 00:34:43.520 [2024-07-21 03:44:28.490645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.520 [2024-07-21 03:44:28.490672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.520 qpair failed and we were unable to recover it. 00:34:43.520 [2024-07-21 03:44:28.490785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.520 [2024-07-21 03:44:28.490814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.520 qpair failed and we were unable to recover it. 00:34:43.520 [2024-07-21 03:44:28.490985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.520 [2024-07-21 03:44:28.491014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.520 qpair failed and we were unable to recover it. 
00:34:43.520 [2024-07-21 03:44:28.491205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.520 [2024-07-21 03:44:28.491257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.520 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair repeats continuously from 2024-07-21 03:44:28.491360 through 03:44:28.526284, alternating across tqpair=0x1bba840, tqpair=0x7fb5f4000b90, and tqpair=0x7fb5fc000b90, every attempt targeting addr=10.0.0.2, port=4420 and ending "qpair failed and we were unable to recover it." ...]
00:34:43.526 [2024-07-21 03:44:28.526284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.526313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 
00:34:43.526 [2024-07-21 03:44:28.526443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.526473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.526620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.526647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.526740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.526783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.526943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.526972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.527169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.527340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.527492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.527603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.526 [2024-07-21 03:44:28.527784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.526 qpair failed and we were unable to recover it. 00:34:43.526 [2024-07-21 03:44:28.527890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.527919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-07-21 03:44:28.528051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.528909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.528949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.529100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.529146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.529293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.529336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.529467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.529493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-07-21 03:44:28.529631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.529692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.529833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.529863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.529985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.530125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.530290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.530505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.530657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.530828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.530872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.531097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.531149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.531381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.531435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-07-21 03:44:28.531582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.531609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.531741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.531767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.531937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.531986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.532966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-07-21 03:44:28.532992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-07-21 03:44:28.533134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-07-21 03:44:28.533309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.533460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.533604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.533744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.533907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.533950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-07-21 03:44:28.534856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.534883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.534997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.535902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.535947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.536053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.536081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.536292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.536354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-07-21 03:44:28.536533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.536566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.536721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.536748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.536890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.536920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.537949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-07-21 03:44:28.537992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-07-21 03:44:28.538106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.538149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-07-21 03:44:28.538316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.538359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.538509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.538534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.538676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.538723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.538846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.538872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-07-21 03:44:28.539844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.539967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.539995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.540957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.540988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.541269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.541321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.541419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.541444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-07-21 03:44:28.541565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.541590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.541767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.541813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.541956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.541986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.542156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.542185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.542349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.542407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.542535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.542563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.542690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.542716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.542849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.542877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.543019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.543048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-07-21 03:44:28.543185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-07-21 03:44:28.543219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-07-21 03:44:28.543348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.543376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.543542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.543571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.543747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.543774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.543893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.543938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.544913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.544939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 
00:34:43.530 [2024-07-21 03:44:28.545084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.545263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.545398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.545587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.545747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.545866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.545892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.546019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.546219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.546411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.546556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 
00:34:43.530 [2024-07-21 03:44:28.546705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.546917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.546947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.547078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.547105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.547229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.547254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.547352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.547378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.530 qpair failed and we were unable to recover it. 00:34:43.530 [2024-07-21 03:44:28.547463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.530 [2024-07-21 03:44:28.547491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.531 qpair failed and we were unable to recover it. 00:34:43.531 [2024-07-21 03:44:28.547625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.531 [2024-07-21 03:44:28.547666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.531 qpair failed and we were unable to recover it. 00:34:43.531 [2024-07-21 03:44:28.547805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.531 [2024-07-21 03:44:28.547836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.531 qpair failed and we were unable to recover it. 00:34:43.531 [2024-07-21 03:44:28.547997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.531 [2024-07-21 03:44:28.548026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.531 qpair failed and we were unable to recover it. 00:34:43.531 [2024-07-21 03:44:28.548156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.531 [2024-07-21 03:44:28.548185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.531 qpair failed and we were unable to recover it. 
00:34:43.531 [2024-07-21 03:44:28.548295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-07-21 03:44:28.548341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [... this three-line error pattern repeats without interruption through 03:44:28.585, cycling across tqpairs 0x7fb5f4000b90, 0x7fb5fc000b90, 0x7fb5ec000b90, and 0x1bba840; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:34:43.538 [2024-07-21 03:44:28.585729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.585756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.585874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.585917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.586106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.586262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.586419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.586623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.586783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.586966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.587211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.587398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 
00:34:43.538 [2024-07-21 03:44:28.587592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.587767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.587964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.587993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.588184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.588221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.588422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.588475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.588651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.588679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.588808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.588835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.538 qpair failed and we were unable to recover it. 00:34:43.538 [2024-07-21 03:44:28.588935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.538 [2024-07-21 03:44:28.588980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.589115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.589271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 
00:34:43.539 [2024-07-21 03:44:28.589430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.589593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.589767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.589942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.589985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.590954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.590979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 
00:34:43.539 [2024-07-21 03:44:28.591095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.591273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.591433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.591585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.591747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.591893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.591919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.592058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.592244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.592428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.592557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 
00:34:43.539 [2024-07-21 03:44:28.592748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.592925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.592965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.593906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.593935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.594070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.594236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 
00:34:43.539 [2024-07-21 03:44:28.594422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.594591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.594724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.594872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.594922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.595082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.595111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.595240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.595269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.595375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.595405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.595547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.539 [2024-07-21 03:44:28.595574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.539 qpair failed and we were unable to recover it. 00:34:43.539 [2024-07-21 03:44:28.595735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.595763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.595926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.595955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-07-21 03:44:28.596197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.596249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.596369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.596399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.596511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.596550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.596684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.596712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.596839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.596866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.597016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.597042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.597233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.597293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.597428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.597457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.597590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.597627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.597800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.597827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-07-21 03:44:28.597980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.598220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.598432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.598623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.598816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.598964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.598991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.599113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.599156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.599255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.599286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.599487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.599516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.599649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.599692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-07-21 03:44:28.599821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.599848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.600805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.600831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-07-21 03:44:28.601527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.601842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.601985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-07-21 03:44:28.602829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-07-21 03:44:28.602918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.602945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-07-21 03:44:28.603094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.603121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.603294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.603323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.603432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.603473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.603605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.603662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.603816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.603843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-07-21 03:44:28.604836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.604862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.604988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.605869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.605895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.606012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.606181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-07-21 03:44:28.606450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.606607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.606769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.606908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.606939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-07-21 03:44:28.607839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-07-21 03:44:28.607865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-07-21 03:44:28.608011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.541 [2024-07-21 03:44:28.608037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.541 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111, ECONNREFUSED) repeat for tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 from 03:44:28.608 through 03:44:28.627]
00:34:43.544 [2024-07-21 03:44:28.627419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.544 [2024-07-21 03:44:28.627459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.544 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) continue through 03:44:28.641, interleaved between tqpair=0x7fb5fc000b90 and tqpair=0x7fb5ec000b90, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:34:43.547 [2024-07-21 03:44:28.642007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.642922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.642967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.643090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.643256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.643408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-07-21 03:44:28.643601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.643780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.643954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.643980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.644829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.644992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-07-21 03:44:28.645151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.645296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.645462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.645618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.645768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.645904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.645933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.646059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.646237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.646441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.646582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-07-21 03:44:28.646754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-07-21 03:44:28.646928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-07-21 03:44:28.646964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.647961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.647986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.648121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.648150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.648335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.648364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-07-21 03:44:28.648526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.648555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.648727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.648754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.648918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.648947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.649942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.649986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.650123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-07-21 03:44:28.650312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.650490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.650677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.650798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.650943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.650968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-07-21 03:44:28.651761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.651885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.651910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.652952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.652978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-07-21 03:44:28.653079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.653105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.653191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.653216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-07-21 03:44:28.653334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-07-21 03:44:28.653361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.653452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.653477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.653569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.653594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.653722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.653747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.653842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.653867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.653981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.654125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.654269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-07-21 03:44:28.654419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.654590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.654768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.654889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.654914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.655058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.655224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.655438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.655590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.655780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.655953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-07-21 03:44:28.656145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.656279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.656403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.656577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.656736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.656958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.656986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.657108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.657277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.657418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.657555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-07-21 03:44:28.657714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.657902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.657931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.658932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.658961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.659070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.659095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-07-21 03:44:28.659222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.659249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.659411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.659438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.659577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.659608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-07-21 03:44:28.659735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-07-21 03:44:28.659761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.659864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.659905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 
00:34:43.550 [2024-07-21 03:44:28.660759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.660881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.660920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.661875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.661914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.662060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.662092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 00:34:43.550 [2024-07-21 03:44:28.662257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.550 [2024-07-21 03:44:28.662299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.550 qpair failed and we were unable to recover it. 
00:34:43.550 [2024-07-21 03:44:28.662410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.550 [2024-07-21 03:44:28.662452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.550 qpair failed and we were unable to recover it.
[... the same error pair repeats continuously from 03:44:28.662 through 03:44:28.696: posix_sock_create connect() failed with errno = 111 (connection refused), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420 for tqpair=0x7fb5f4000b90, 0x7fb5ec000b90, 0x7fb5fc000b90, and 0x1bba840; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:43.555 [2024-07-21 03:44:28.696802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.696828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.696998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-07-21 03:44:28.697796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-07-21 03:44:28.697910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.697942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 
00:34:43.556 [2024-07-21 03:44:28.698364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.698946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.698975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 
00:34:43.556 [2024-07-21 03:44:28.699823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.699850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.699973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.700966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.700992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.701114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.701311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 
00:34:43.556 [2024-07-21 03:44:28.701472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.701653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.701796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.701907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.701935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.702923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.702950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 
00:34:43.556 [2024-07-21 03:44:28.703107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.703312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.703477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.703702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.703824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.703971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.703996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.704119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.704162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.704320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.556 [2024-07-21 03:44:28.704348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.556 qpair failed and we were unable to recover it. 00:34:43.556 [2024-07-21 03:44:28.704470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.704512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.704695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.704722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 
00:34:43.557 [2024-07-21 03:44:28.704842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.704867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.705902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.705938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.706031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.706206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 
00:34:43.557 [2024-07-21 03:44:28.706427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.706553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.706738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.706887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.706912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.707764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 
00:34:43.557 [2024-07-21 03:44:28.707922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.707950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.708118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.708294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.708421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.708629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.708807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.708970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.709018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.709234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.709285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.709538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.709590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.709712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.709741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 
00:34:43.557 [2024-07-21 03:44:28.709901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.709928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.710892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.710916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.711054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.557 [2024-07-21 03:44:28.711097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.557 qpair failed and we were unable to recover it. 00:34:43.557 [2024-07-21 03:44:28.711189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 
00:34:43.558 [2024-07-21 03:44:28.711297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.711459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.711634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.711815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.711961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.711992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.712204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.712264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.712390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.712417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.712542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.712567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.712732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.712758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.712929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.712958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 
00:34:43.558 [2024-07-21 03:44:28.713074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.713229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.713400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.713570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.713701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.713870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.713928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.714099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.714257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.714426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.714593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 
00:34:43.558 [2024-07-21 03:44:28.714749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.714904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.714933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.715953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.715977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 
00:34:43.558 [2024-07-21 03:44:28.716193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.716905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.558 [2024-07-21 03:44:28.716929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.558 qpair failed and we were unable to recover it. 00:34:43.558 [2024-07-21 03:44:28.717064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-07-21 03:44:28.717091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-07-21 03:44:28.717185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-07-21 03:44:28.717212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-07-21 03:44:28.717365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-07-21 03:44:28.717393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-07-21 03:44:28.717516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-07-21 03:44:28.717540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 
00:34:43.559 [2024-07-21 03:44:28.717697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-07-21 03:44:28.717723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 03:44:28.717697 through 03:44:28.753433, cycling through tqpair values 0x1bba840, 0x7fb5ec000b90, 0x7fb5f4000b90, and 0x7fb5fc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:43.564 [2024-07-21 03:44:28.753557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.564 [2024-07-21 03:44:28.753601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.564 qpair failed and we were unable to recover it. 00:34:43.564 [2024-07-21 03:44:28.753760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.564 [2024-07-21 03:44:28.753787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.753933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.753972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.754887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.754912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.755139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.755190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 
00:34:43.565 [2024-07-21 03:44:28.755461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.755538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.755692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.755721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.755875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.755922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.756108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.756158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.756397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.756446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.756546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.756573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.756753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.756797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.756888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.756915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.757036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.757273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 
00:34:43.565 [2024-07-21 03:44:28.757431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.757628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.757777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.757955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.757991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.758186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.758246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.758378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.758407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.758548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.758573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.565 [2024-07-21 03:44:28.758700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.565 [2024-07-21 03:44:28.758726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.565 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.758851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.758879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.759092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 
00:34:43.566 [2024-07-21 03:44:28.759227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.759416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.759588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.759712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.759847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.759886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 
00:34:43.566 [2024-07-21 03:44:28.760757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.760879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.760922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.761851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.761881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.762014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.762202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 
00:34:43.566 [2024-07-21 03:44:28.762373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.762595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.762754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.762870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.762895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.566 [2024-07-21 03:44:28.763060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.566 [2024-07-21 03:44:28.763103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.566 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.763290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.763346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.763464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.763492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.763609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.763641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.763775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.763804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.763965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.763993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 
00:34:43.567 [2024-07-21 03:44:28.764179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.764327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.764483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.764653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.764771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.764918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.764943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.765118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.765286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.765487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.765646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 
00:34:43.567 [2024-07-21 03:44:28.765800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.765913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.765938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.766860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.766890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.767007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.767037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.767163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.767191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 
00:34:43.567 [2024-07-21 03:44:28.767291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.767319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.567 [2024-07-21 03:44:28.767410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.567 [2024-07-21 03:44:28.767438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.567 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.767566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.767606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.767744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.767771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.767905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.767948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 
00:34:43.568 [2024-07-21 03:44:28.768796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.768942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.768968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.769880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.769908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.770038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.770187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 
00:34:43.568 [2024-07-21 03:44:28.770326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.770489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.770661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.770825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.770869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.771035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.771067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.771161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.771189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.771330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.771360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.771498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.568 [2024-07-21 03:44:28.771524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.568 qpair failed and we were unable to recover it. 00:34:43.568 [2024-07-21 03:44:28.771659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.771698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.771853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.771897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 
00:34:43.569 [2024-07-21 03:44:28.772028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.772257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.772474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.772639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.772791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.772926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.772954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.773066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.773227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.773399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.773597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 
00:34:43.569 [2024-07-21 03:44:28.773755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.773872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.773898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.774055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.774113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.774306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.774360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.774529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.774557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.774674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.774702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.774850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.569 [2024-07-21 03:44:28.774875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.569 qpair failed and we were unable to recover it. 00:34:43.569 [2024-07-21 03:44:28.775024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.570 [2024-07-21 03:44:28.775054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.570 qpair failed and we were unable to recover it. 00:34:43.570 [2024-07-21 03:44:28.775304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.570 [2024-07-21 03:44:28.775354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.570 qpair failed and we were unable to recover it. 00:34:43.570 [2024-07-21 03:44:28.775487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.570 [2024-07-21 03:44:28.775516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.570 qpair failed and we were unable to recover it. 
00:34:43.570 [2024-07-21 03:44:28.775633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.570 [2024-07-21 03:44:28.775675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.570 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for roughly two hundred further connection attempts between 03:44:28.775 and 03:44:28.809, cycling through tqpair handles 0x7fb5ec000b90, 0x7fb5fc000b90, 0x7fb5f4000b90, and 0x1bba840; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:43.858 [2024-07-21 03:44:28.809810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.858 [2024-07-21 03:44:28.809835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.858 qpair failed and we were unable to recover it.
00:34:43.858 [2024-07-21 03:44:28.809939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.809966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.810873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.810916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.811079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.811264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.811454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 
00:34:43.858 [2024-07-21 03:44:28.811573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.811719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.811892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.811923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.812816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.812845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 
00:34:43.858 [2024-07-21 03:44:28.813144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.813860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.813995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.814283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.814471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.814649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.814803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 
00:34:43.858 [2024-07-21 03:44:28.814965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.814993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.815190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.815244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.815415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.815465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.815579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.815610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.815791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.815818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.815908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.815950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-07-21 03:44:28.816102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-07-21 03:44:28.816153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.816316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.816383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.816506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.816535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.816637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.816681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-07-21 03:44:28.816782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.816808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.816929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.816953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.817900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.817924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-07-21 03:44:28.818344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.818965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.818992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.819112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.819153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.819325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.819354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.819485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.819517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.819671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.819710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.819817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.819843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-07-21 03:44:28.819995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.820152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.820493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.820641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.820816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.820961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.820988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-07-21 03:44:28.821666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.821946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.821981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.822107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.822133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.822256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.822283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.822376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.822402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.822526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-07-21 03:44:28.822553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-07-21 03:44:28.822650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.822693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.822792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.822822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.822948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.822976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-07-21 03:44:28.823109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.823139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.823274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.823302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.823426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.823454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.823579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.823605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.823773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.823817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.824078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.824317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.824534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.824657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.824806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-07-21 03:44:28.824941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.824971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.825917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.825944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.826069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.826095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.826235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.826264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.826467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.826497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-07-21 03:44:28.826669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.826695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.826840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.826865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.827846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.827873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.828033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.828175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-07-21 03:44:28.828369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.828524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.828697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-07-21 03:44:28.828851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-07-21 03:44:28.828892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.829814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.829840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 
00:34:43.861 [2024-07-21 03:44:28.829960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.830869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.830909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.831043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.831196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.831429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 
00:34:43.861 [2024-07-21 03:44:28.831563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.831699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.831862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.831888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.832828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.832855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-07-21 03:44:28.833090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-07-21 03:44:28.833146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it.) repeats from 03:44:28.833271 through 03:44:28.866972 for tqpairs 0x1bba840, 0x7fb5fc000b90, 0x7fb5f4000b90, and 0x7fb5ec000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:43.868 [2024-07-21 03:44:28.867161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.867334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.867467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.867639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.867755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.867885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.867912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.868070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.868231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.868383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.868522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 
00:34:43.868 [2024-07-21 03:44:28.868662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.868843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.868888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.869918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.869949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.870086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.870115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.870365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.870413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 
00:34:43.868 [2024-07-21 03:44:28.870549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.870575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.870674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.870703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.870817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.870856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.870985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.871211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.871377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.871572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.871706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.871823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.871849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.872008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 
00:34:43.868 [2024-07-21 03:44:28.872240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.872376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.872543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.872684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.872833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.868 [2024-07-21 03:44:28.872859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.868 qpair failed and we were unable to recover it. 00:34:43.868 [2024-07-21 03:44:28.873029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.873182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.873342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.873467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.873607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 
00:34:43.869 [2024-07-21 03:44:28.873776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.873928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.873967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.874953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.874985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.875091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.875119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.875274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.875302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 
00:34:43.869 [2024-07-21 03:44:28.875508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.875564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.875715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.875742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.875854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.875885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.876884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.876912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.877047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 
00:34:43.869 [2024-07-21 03:44:28.877219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.877418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.877547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.877746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.877946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.877977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.878219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.878247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.878437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.878462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.878562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.878587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.878718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.878745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.878847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.878876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 
00:34:43.869 [2024-07-21 03:44:28.879133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.879186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.879377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.879439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.879595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.879635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.879772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.879797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.879944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.879973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.880177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.880239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.869 qpair failed and we were unable to recover it. 00:34:43.869 [2024-07-21 03:44:28.880478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.869 [2024-07-21 03:44:28.880531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.880667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.880693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.880817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.880842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.880945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.880973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 
00:34:43.870 [2024-07-21 03:44:28.881085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.881127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.881285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.881314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.881415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.881443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.881632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.881672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.881835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.881873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.882028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.882188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.882378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.882564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.882752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 
00:34:43.870 [2024-07-21 03:44:28.882868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.882894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.883935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.883974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.884222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.884274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.884475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.884530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.884646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.884673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 
00:34:43.870 [2024-07-21 03:44:28.884817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.884858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.885078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.885276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.885448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.885595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.885796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.885968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.886127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.886309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.886453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 
00:34:43.870 [2024-07-21 03:44:28.886594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.870 [2024-07-21 03:44:28.886758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.870 [2024-07-21 03:44:28.886783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.870 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.886933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.886958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.887850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.887992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 
00:34:43.871 [2024-07-21 03:44:28.888205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.888378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.888531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.888656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.888780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.888900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.888926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.889047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.889072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.889211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.889239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.889372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.889400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 00:34:43.871 [2024-07-21 03:44:28.889547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.871 [2024-07-21 03:44:28.889590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.871 qpair failed and we were unable to recover it. 
00:34:43.871 [2024-07-21 03:44:28.889776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.871 [2024-07-21 03:44:28.889824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.871 qpair failed and we were unable to recover it.
00:34:43.871-00:34:43.876 [... the same three-line failure sequence repeats continuously from 03:44:28.889 through 03:44:28.924 for tqpair handles 0x7fb5f4000b90, 0x7fb5fc000b90, 0x7fb5ec000b90, and 0x1bba840: every connect() attempt to addr=10.0.0.2, port=4420 is refused with errno = 111 (ECONNREFUSED) and no qpair recovers. Identical records elided for readability. ...]
00:34:43.876 [2024-07-21 03:44:28.924623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.924680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.924785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.924812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.924936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.924962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.925945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.925975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 00:34:43.876 [2024-07-21 03:44:28.926119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.926145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.876 qpair failed and we were unable to recover it. 
00:34:43.876 [2024-07-21 03:44:28.926348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.876 [2024-07-21 03:44:28.926405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.926551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.926577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.926695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.926736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.926878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.926917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.927950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.927981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 
00:34:43.877 [2024-07-21 03:44:28.928157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.928288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.928450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.928643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.928784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.928960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.928987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.929104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.929259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.929427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.929582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 
00:34:43.877 [2024-07-21 03:44:28.929707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.929865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.929904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.930892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.930948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.931187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.931241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.931467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.931493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 
00:34:43.877 [2024-07-21 03:44:28.931645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.931677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.931811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.931840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.931972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.932873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.932922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.933062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.933091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 
00:34:43.877 [2024-07-21 03:44:28.933274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.933317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.877 qpair failed and we were unable to recover it. 00:34:43.877 [2024-07-21 03:44:28.933409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.877 [2024-07-21 03:44:28.933435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.933581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.933607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.933732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.933758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.933863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.933892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.934093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.934135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.934276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.934315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.934435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.934461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.934606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.934658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.934805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.934830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 
00:34:43.878 [2024-07-21 03:44:28.935091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.935141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.935302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.935328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.935494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.935520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.935654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.935694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.935825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.935852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 
00:34:43.878 [2024-07-21 03:44:28.936771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.936944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.936969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.937887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.937912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 
00:34:43.878 [2024-07-21 03:44:28.938355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.938951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.938976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.939117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.939146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.939312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.939340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.939446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.939475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.939597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.939668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.939806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.939834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 
00:34:43.878 [2024-07-21 03:44:28.940021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.940087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.940263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.878 [2024-07-21 03:44:28.940290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.878 qpair failed and we were unable to recover it. 00:34:43.878 [2024-07-21 03:44:28.940486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.940515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.940659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.940687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.940807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.940832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.940952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.940980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.941109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.941264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.941436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.941598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 
00:34:43.879 [2024-07-21 03:44:28.941777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.941917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.941942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.942906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.942936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.943060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.943239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 
00:34:43.879 [2024-07-21 03:44:28.943370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.943608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.943744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.943902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.943938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.944145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.944395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.944598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.944739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.944864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.944977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 
00:34:43.879 [2024-07-21 03:44:28.945160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.945322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.945465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.945584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.945734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.945856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.945885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.946027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.946056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.879 qpair failed and we were unable to recover it. 00:34:43.879 [2024-07-21 03:44:28.946264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.879 [2024-07-21 03:44:28.946315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.946421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.946447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.946538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.946563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 
00:34:43.880 [2024-07-21 03:44:28.946726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.946771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.946935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.946981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.947887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.947989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.948193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 
00:34:43.880 [2024-07-21 03:44:28.948338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.948483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.948604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.948749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.948936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.948978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 
00:34:43.880 [2024-07-21 03:44:28.949772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.949886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.949922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.950846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.950875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.951052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.951213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 
00:34:43.880 [2024-07-21 03:44:28.951396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.951548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.951700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.951866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.951895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.952029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.952071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.952213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.952243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.952358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.952386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.952521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.880 [2024-07-21 03:44:28.952552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.880 qpair failed and we were unable to recover it. 00:34:43.880 [2024-07-21 03:44:28.952713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.952740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.952854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.952884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 
00:34:43.881 [2024-07-21 03:44:28.952982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.953865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.953983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.954120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.954319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.954496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 
00:34:43.881 [2024-07-21 03:44:28.954682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.954812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.954841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.954992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.955181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.955345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.955484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.955692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.955842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.955870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 
00:34:43.881 [2024-07-21 03:44:28.956304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.956907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.956994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.957194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.957348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.957482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.957629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 
00:34:43.881 [2024-07-21 03:44:28.957797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.957964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.957992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.958963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.958994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.959152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.881 [2024-07-21 03:44:28.959182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.881 qpair failed and we were unable to recover it. 00:34:43.881 [2024-07-21 03:44:28.959383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.959432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 
00:34:43.882 [2024-07-21 03:44:28.959536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.959566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.959703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.959730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.959874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.959903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.960896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.960922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 
00:34:43.882 [2024-07-21 03:44:28.961197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.961839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.961963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.962150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.962337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.962497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.962671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 
00:34:43.882 [2024-07-21 03:44:28.962821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.962849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.962975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.963898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.963983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 
00:34:43.882 [2024-07-21 03:44:28.964264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.964837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.964982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.965007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.965104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.965130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.965253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.965278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.965395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.965420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 00:34:43.882 [2024-07-21 03:44:28.965550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.882 [2024-07-21 03:44:28.965575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.882 qpair failed and we were unable to recover it. 
00:34:43.882 [2024-07-21 03:44:28.965699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.965743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.965841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.965869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.965963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.965989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.966846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.966871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 
00:34:43.883 [2024-07-21 03:44:28.967156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.967864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.967889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 
00:34:43.883 [2024-07-21 03:44:28.968692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.968950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.968976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.969137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.969165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.969297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.969325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.969435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.969478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.969598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.969658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.969799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.969838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.970017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.970231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 
00:34:43.883 [2024-07-21 03:44:28.970416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.970601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.970731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.970879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.970905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.971017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.883 [2024-07-21 03:44:28.971042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.883 qpair failed and we were unable to recover it. 00:34:43.883 [2024-07-21 03:44:28.971157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.971183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.971337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.971367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.971521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.971549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.971703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.971732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.971860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.971916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 
00:34:43.884 [2024-07-21 03:44:28.972059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.972930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.972955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.973068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.973109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.973209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.973237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 00:34:43.884 [2024-07-21 03:44:28.973386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.884 [2024-07-21 03:44:28.973410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.884 qpair failed and we were unable to recover it. 
00:34:43.884 [2024-07-21 03:44:28.973549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.884 [2024-07-21 03:44:28.973591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.884 qpair failed and we were unable to recover it.
00:34:43.884 [previous three messages repeated near-verbatim from 03:44:28.973 through 03:44:29.002, with only the microsecond timestamps changing, cycling over tqpair=0x1bba840, 0x7fb5ec000b90, 0x7fb5f4000b90, and 0x7fb5fc000b90; every attempt against 10.0.0.2:4420 failed with errno = 111]
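Errno 111 here is ECONNREFUSED: each connect() reaches 10.0.0.2, but nothing is listening on port 4420, so the kernel actively refuses the attempt and posix_sock_create reports it. A minimal bash sketch (hypothetical, not part of the test suite) that reproduces the same failure mode against a down target:

#!/usr/bin/env bash
# Hypothetical repro sketch, assuming 10.0.0.2 is reachable but no NVMe-oF
# target is listening on port 4420 (the state the log above is in).
# bash's /dev/tcp redirection performs a plain TCP connect(); with no
# listener on the port, connect() fails with ECONNREFUSED (errno 111)
# and the subshell exits nonzero.
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
  echo "connect to 10.0.0.2:4420 failed -- expected ECONNREFUSED while the target is down"
fi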
00:34:43.888 [2024-07-21 03:44:29.002708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2568345 Killed "${NVMF_APP[@]}" "$@" 00:34:43.888 [2024-07-21 03:44:29.002735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.888 qpair failed and we were unable to recover it. 00:34:43.888 [2024-07-21 03:44:29.002830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.888 [2024-07-21 03:44:29.002857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.888 qpair failed and we were unable to recover it. 00:34:43.888 [2024-07-21 03:44:29.002964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.889 [2024-07-21 03:44:29.002990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.889 qpair failed and we were unable to recover it. 00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:43.889 [2024-07-21 03:44:29.003094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.889 [2024-07-21 03:44:29.003120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.889 qpair failed and we were unable to recover it. 00:34:43.889 [2024-07-21 03:44:29.003239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.889 [2024-07-21 03:44:29.003266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b9 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:43.889 0 with addr=10.0.0.2, port=4420 00:34:43.889 qpair failed and we were unable to recover it. 00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:43.889 [2024-07-21 03:44:29.003429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.889 [2024-07-21 03:44:29.003458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.889 qpair failed and we were unable to recover it. 00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:43.889 [2024-07-21 03:44:29.003578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.889 [2024-07-21 03:44:29.003605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.889 qpair failed and we were unable to recover it. 
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:43.889 [2024-07-21 03:44:29.003749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.889 [2024-07-21 03:44:29.003788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.889 qpair failed and we were unable to recover it.
[... the same triplet repeats 8 more times, now for a second qpair context, tqpair=0x7fb5ec000b90, 03:44:29.003949 through 03:44:29.005177 ...]
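The trace lines above are the other half of the story: target_disconnect.sh has SIGKILLed the running nvmf_tgt (that is what produced the Killed "${NVMF_APP[@]}" message for pid 2568345) and is now bringing a fresh instance up via nvmfappstart while the initiator keeps retrying. A simplified sketch of that kill-and-restart cycle; the binary path, namespace, and flags are taken from this log, but the function itself is illustrative and is not SPDK's actual helper:

#!/usr/bin/env bash
# Illustrative sketch of the disconnect/restart cycle the trace implies;
# not the real target_disconnect.sh / nvmfappstart implementation.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

restart_target() {
    local old_pid=$1
    # SIGKILL the running target; from the initiator's point of view the
    # listener vanishes and every reconnect gets ECONNREFUSED until the
    # new instance binds the port again.
    kill -9 "${old_pid}" 2>/dev/null || true
    # Relaunch with the flags shown in the trace (-i 0 -e 0xFFFF -m 0xF0)
    # inside the test's network namespace.
    ip netns exec cvl_0_0_ns_spdk "${NVMF_TGT}" -i 0 -e 0xFFFF -m 0xF0 &
    echo $!   # new target pid
}

In this run the killed instance was pid 2568345 and the relaunched one came up as pid 2568895, which is the value the trace assigns to nvmfpid below.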
[... the failure triplet continues uninterrupted for tqpair=0x7fb5ec000b90, 10 repetitions, 03:44:29.005275 through 03:44:29.006737 ...]
00:34:43.889 [2024-07-21 03:44:29.006883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.889 [2024-07-21 03:44:29.006913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.889 qpair failed and we were unable to recover it.
[... one more failure triplet for tqpair=0x7fb5ec000b90, 03:44:29.007057 through 03:44:29.007083 ...]
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2568895
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2568895
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2568895 ']'
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[... four more failure triplets for tqpair=0x7fb5ec000b90, 03:44:29.007177 through 03:44:29.007719, interleaved with the trace above ...]
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:43.889 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:43.889 [2024-07-21 03:44:29.007866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.889 [2024-07-21 03:44:29.007893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.889 qpair failed and we were unable to recover it.
[... seven more failure triplets for tqpair=0x7fb5ec000b90, 03:44:29.008023 through 03:44:29.008884, then one for tqpair=0x7fb5fc000b90 at 03:44:29.009007 ...]
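The waitforlisten 2568895 call traced above gates the rest of the test: it blocks until the relaunched target is ready, and its trace already reveals the two knobs involved (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A minimal sketch of a helper with that shape; the loop body is illustrative only, not SPDK's actual waitforlisten:

#!/usr/bin/env bash
# Hedged sketch of what a "waitforlisten"-style helper has to do:
# poll until the app is alive and its RPC UNIX socket exists.
# rpc_addr and max_retries mirror the values in the trace.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the process died during startup.
        kill -0 "${pid}" 2>/dev/null || return 1
        # Ready once the UNIX-domain RPC socket has been created.
        [[ -S "${rpc_addr}" ]] && return 0
        sleep 0.5
    done
    return 1
}

wait_for_rpc_socket 2568895 /var/tmp/spdk.sock 100 && echo "target is up"

Polling for the socket rather than sleeping a fixed interval keeps the test fast when startup is quick, and bailing out as soon as the pid disappears avoids burning the full retry budget on a crashed target.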
00:34:43.890 [2024-07-21 03:44:29.009211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.890 [2024-07-21 03:44:29.009240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.890 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed triplet repeats without interruption from 03:44:29.009386 through 03:44:29.031742, alternating in runs between tqpair=0x7fb5fc000b90 and tqpair=0x7fb5ec000b90, while the relaunched target is still starting up ...]
00:34:43.893 [2024-07-21 03:44:29.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.893 [2024-07-21 03:44:29.031891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.893 qpair failed and we were unable to recover it.
00:34:43.893 [2024-07-21 03:44:29.032040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.893 [2024-07-21 03:44:29.032069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.893 qpair failed and we were unable to recover it. 00:34:43.893 [2024-07-21 03:44:29.032170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.893 [2024-07-21 03:44:29.032195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.893 qpair failed and we were unable to recover it. 00:34:43.893 [2024-07-21 03:44:29.032313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.893 [2024-07-21 03:44:29.032338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.893 qpair failed and we were unable to recover it. 00:34:43.893 [2024-07-21 03:44:29.032514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.893 [2024-07-21 03:44:29.032540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.032658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.032685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.032765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.032790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.032924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.032952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 
00:34:43.894 [2024-07-21 03:44:29.033444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.033908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.033956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.034880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.034905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 
00:34:43.894 [2024-07-21 03:44:29.035052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.035899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.035926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 
00:34:43.894 [2024-07-21 03:44:29.036524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.036962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.036991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.037945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.037970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 
00:34:43.894 [2024-07-21 03:44:29.038095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.038120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.038215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.038240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.038339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.038368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.038515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.894 [2024-07-21 03:44:29.038541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.894 qpair failed and we were unable to recover it. 00:34:43.894 [2024-07-21 03:44:29.038672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.038699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.038817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.038843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.038971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.038998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.039122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.039290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.039463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 
00:34:43.895 [2024-07-21 03:44:29.039619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.039796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.039965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.039990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.040864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.040994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 
00:34:43.895 [2024-07-21 03:44:29.041162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.041358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.041500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.041681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.041823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.041973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.041999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.042145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.042330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.042459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.042636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 
00:34:43.895 [2024-07-21 03:44:29.042782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.042897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.042930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.043859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.043884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.044005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.044152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 
00:34:43.895 [2024-07-21 03:44:29.044333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.044520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.044678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.044848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.044874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.045043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.895 [2024-07-21 03:44:29.045072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.895 qpair failed and we were unable to recover it. 00:34:43.895 [2024-07-21 03:44:29.045179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.045207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.045313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.045342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.045435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.045463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.045587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.045619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.045784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.045814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 
00:34:43.896 [2024-07-21 03:44:29.045974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.046860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.046889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.047016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.047182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.047368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 
00:34:43.896 [2024-07-21 03:44:29.047527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.047709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.047865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.047894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.048865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.048891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 
00:34:43.896 [2024-07-21 03:44:29.049134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.049821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.049996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.050024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.050145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.050171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.050262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.050287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.050374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.050417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 00:34:43.896 [2024-07-21 03:44:29.050571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.896 [2024-07-21 03:44:29.050597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.896 qpair failed and we were unable to recover it. 
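errno 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting connections on port 4420 (the IANA-registered NVMe/TCP port), so every qpair connect attempt from posix_sock_create is refused. A minimal sketch that reproduces the same errno pattern with plain POSIX sockets (illustrative only, not SPDK code):

    /* Attempt a TCP connect to the address/port from the log; with no
     * listener bound to 4420 this prints errno 111 (Connection refused). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

Built with cc and run while no target is listening on 4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create errors above.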
00:34:43.897 [... connection-failure sequence continues through 03:44:29.051706 (tqpair=0x7fb5fc000b90, addr=10.0.0.2, port=4420) ...]
00:34:43.897 [2024-07-21 03:44:29.051744] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:43.897 [2024-07-21 03:44:29.051823] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
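The EAL parameters line shows the nvmf process coming up mid-stream: -c 0xF0 is the DPDK core mask (CPU cores 4-7), --base-virtaddr pins the base address for hugepage mappings, --file-prefix=spdk0 isolates this process's hugepage files, and --proc-type=auto lets DPDK pick the primary/secondary process role. A quick way to decode the core mask from the log, shown as a sketch rather than anything SPDK runs itself:

    # decode the EAL core mask from the log: 0xF0 -> CPU cores 4-7
    python3 -c 'm=0xF0; print([i for i in range(m.bit_length()) if m>>i & 1])'
    # [4, 5, 6, 7]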
00:34:43.897 [... connection-failure sequence resumes at 03:44:29.051824 and repeats through 03:44:29.058245; tqpair handles 0x7fb5fc000b90, 0x7fb5ec000b90, and 0x7fb5f4000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:43.898 [2024-07-21 03:44:29.058386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.058428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.058520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.058546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.058658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.058684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.058800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.058840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.058936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.058962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.059059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.059102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.059288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.059343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.059502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.059552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.059679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.059706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.059837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.059863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-21 03:44:29.060012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.060153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.060373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.060554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.060731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.060948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.060979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.061080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.061353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.061544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.061670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-21 03:44:29.061819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.061945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.061971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.062108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.062137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.062411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.062464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.062602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.062640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.062766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.062792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.062905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.062933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.063099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.063148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.063322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.063367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.063492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.063518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-21 03:44:29.063621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.063650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.063738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.898 [2024-07-21 03:44:29.063781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-21 03:44:29.063879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.063907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.064907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.064935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.065074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 
00:34:43.899 [2024-07-21 03:44:29.065263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.065410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.065549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.065724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.065869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.065918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.066079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.066279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.066416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.066573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.066708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 
00:34:43.899 [2024-07-21 03:44:29.066860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.066886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.067842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.067984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.068166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.068385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 
00:34:43.899 [2024-07-21 03:44:29.068543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.068699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.068827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.068965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.068992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.069859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.069884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 
00:34:43.899 [2024-07-21 03:44:29.070070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.070098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.070241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.070271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.070411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.899 [2024-07-21 03:44:29.070439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-21 03:44:29.070556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.070580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.070706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.070733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.070816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.070841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.070957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.070982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.071072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.071239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.071460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 
00:34:43.900 [2024-07-21 03:44:29.071578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.071755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.071879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.071904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.072893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.072920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 
00:34:43.900 [2024-07-21 03:44:29.073062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.073090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.073245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.073272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.073368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.073396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.073565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.073605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.073828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.073856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.074072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.074270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.074413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.074577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.074762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 
00:34:43.900 [2024-07-21 03:44:29.074929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.074958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.075136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.075186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.075437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.075489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.075649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.075693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.075790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.075816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.075981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.076010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.076169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.076198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.076298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.076329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.076468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.076507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 00:34:43.900 [2024-07-21 03:44:29.076661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.900 [2024-07-21 03:44:29.076689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.900 qpair failed and we were unable to recover it. 
00:34:43.900 [2024-07-21 03:44:29.076811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.076838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.076957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.076986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.077914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.077943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.078126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.078332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 
00:34:43.901 [2024-07-21 03:44:29.078503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.078651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.078772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.078956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.078985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.079143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.079177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.079329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.079358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.079525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.079553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.079715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.079755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.079908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.079955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 00:34:43.901 [2024-07-21 03:44:29.080098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.901 [2024-07-21 03:44:29.080144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.901 qpair failed and we were unable to recover it. 
00:34:43.901 [2024-07-21 03:44:29.080349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.080391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.080517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.080545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.080700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.080744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.080885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.080916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.081052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.081083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.081209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.081239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.081399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.901 [2024-07-21 03:44:29.081427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.901 qpair failed and we were unable to recover it.
00:34:43.901 [2024-07-21 03:44:29.081571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.081596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.081730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.081756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.081879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.081905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.082905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.082947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.083855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.083910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.084918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.084962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.902 qpair failed and we were unable to recover it.
00:34:43.902 [2024-07-21 03:44:29.085821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.902 [2024-07-21 03:44:29.085846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.085998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.086852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.086983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.087943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.087971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.088135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.088296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.903 EAL: No free 2048 kB hugepages reported on node 1
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.088522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.088680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.088829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.088968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.089163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.089342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.089532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.089678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.089848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.089877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.090953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.090982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.091085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.091113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.091274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.903 [2024-07-21 03:44:29.091320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.903 qpair failed and we were unable to recover it.
00:34:43.903 [2024-07-21 03:44:29.091470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.091497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.091604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.091657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.091769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.091795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.091888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.091913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.092871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.092897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.093838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.093878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.094892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.094917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.095910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.095993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.096018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.096137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.096164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.096307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.096337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.096456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.904 [2024-07-21 03:44:29.096481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.904 qpair failed and we were unable to recover it.
00:34:43.904 [2024-07-21 03:44:29.096602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.096634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.096731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.096757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.096878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.096904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.097929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.097954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.098920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.098959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.099957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.099987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.100910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.100935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.101860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.101887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.905 qpair failed and we were unable to recover it.
00:34:43.905 [2024-07-21 03:44:29.102006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.905 [2024-07-21 03:44:29.102032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.102956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.102985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.103922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.103948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.104902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.104928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.105938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.105964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.106084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.106111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.106201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.106229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.106321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.106349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.106476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.106502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.906 qpair failed and we were unable to recover it.
00:34:43.906 [2024-07-21 03:44:29.106630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.906 [2024-07-21 03:44:29.106659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.106752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.106778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.106874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.106902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.106988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.107867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.107987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.108833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.108872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.109905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.109930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.110872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.110898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.111017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.111042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.111138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.111169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.111287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.907 [2024-07-21 03:44:29.111313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:43.907 qpair failed and we were unable to recover it.
00:34:43.907 [2024-07-21 03:44:29.111401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.111427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.111540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.111565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.111692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.111732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.111895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.111923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.112046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.112072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.112184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.112209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.112304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.907 [2024-07-21 03:44:29.112329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.907 qpair failed and we were unable to recover it. 00:34:43.907 [2024-07-21 03:44:29.112422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.112448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.112535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.112565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.112707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.112745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 
00:34:43.908 [2024-07-21 03:44:29.112872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.112898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.112993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.113965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.113991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.114109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 
00:34:43.908 [2024-07-21 03:44:29.114269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.114439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.114588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.114721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.114868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.114895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 
00:34:43.908 [2024-07-21 03:44:29.115756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.115899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.115988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.116890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.116934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 
00:34:43.908 [2024-07-21 03:44:29.117208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.117895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.117921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.118019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.118045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.908 qpair failed and we were unable to recover it. 00:34:43.908 [2024-07-21 03:44:29.118162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.908 [2024-07-21 03:44:29.118188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.118309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.118459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 
00:34:43.909 [2024-07-21 03:44:29.118601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.118727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.118849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.118967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.118992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.119883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.119909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 
00:34:43.909 [2024-07-21 03:44:29.119996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.120903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.120930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 
00:34:43.909 [2024-07-21 03:44:29.121454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.121913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.121939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.909 [2024-07-21 03:44:29.122208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 
00:34:43.909 [2024-07-21 03:44:29.122714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.122835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.122861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.909 [2024-07-21 03:44:29.123924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.909 [2024-07-21 03:44:29.123949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.909 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 
00:34:43.910 [2024-07-21 03:44:29.124234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.124902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.124930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 
00:34:43.910 [2024-07-21 03:44:29.125511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.125933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.125959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 
00:34:43.910 [2024-07-21 03:44:29.126853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.126878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.126996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.127936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.127962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.128065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.128090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.128184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.128215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 
00:34:43.910 [2024-07-21 03:44:29.128306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.128332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.128428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.128454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.910 [2024-07-21 03:44:29.128543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.910 [2024-07-21 03:44:29.128568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:43.910 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.128727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.128757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.128855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.128882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.128982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.129138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.129288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.129462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 00:34:43.911 [2024-07-21 03:44:29.129603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it. 
00:34:43.911 [2024-07-21 03:44:29.129768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.911 [2024-07-21 03:44:29.129807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:43.911 qpair failed and we were unable to recover it.
[... the same three-message failure pattern repeats without variation from 03:44:29.129938 through 03:44:29.158986 (elapsed 00:34:43.911 to 00:34:44.184): each attempt logs "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111", then "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420" for one of the handles 0x7fb5ec000b90, 0x7fb5f4000b90, 0x7fb5fc000b90, or 0x1bba840, then "qpair failed and we were unable to recover it." ...]
00:34:44.184 [2024-07-21 03:44:29.159108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.184 [2024-07-21 03:44:29.159134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.184 qpair failed and we were unable to recover it. 00:34:44.184 [2024-07-21 03:44:29.159257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.184 [2024-07-21 03:44:29.159283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.184 qpair failed and we were unable to recover it. 00:34:44.184 [2024-07-21 03:44:29.159380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.184 [2024-07-21 03:44:29.159406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.184 qpair failed and we were unable to recover it. 00:34:44.184 [2024-07-21 03:44:29.159494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.184 [2024-07-21 03:44:29.159519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.184 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.159606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.159638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.159756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.159782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.159931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.159957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 
00:34:44.185 [2024-07-21 03:44:29.160418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.160853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.160880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 
00:34:44.185 [2024-07-21 03:44:29.161834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.161861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.161996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.162973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.162998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.163114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 
00:34:44.185 [2024-07-21 03:44:29.163261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.163419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.163592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.163765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.163919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.163950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 
00:34:44.185 [2024-07-21 03:44:29.164783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.164903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.164928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.165020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.165045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.165165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.165191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.185 [2024-07-21 03:44:29.165318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.185 [2024-07-21 03:44:29.165345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.185 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.165446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.165474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.165571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.165600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.165731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.165758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.165889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.165914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 
00:34:44.186 [2024-07-21 03:44:29.166180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.166932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.166958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 
00:34:44.186 [2024-07-21 03:44:29.167721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.167869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.167991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.168883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.168910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 
00:34:44.186 [2024-07-21 03:44:29.169176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.169900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.169927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.170072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.170097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.170220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.170246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.186 qpair failed and we were unable to recover it. 00:34:44.186 [2024-07-21 03:44:29.170367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.186 [2024-07-21 03:44:29.170392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.170496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.170535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 
00:34:44.187 [2024-07-21 03:44:29.170653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.170692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.170830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.170869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.170997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.171887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.171988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 
00:34:44.187 [2024-07-21 03:44:29.172139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.172911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.172936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 
00:34:44.187 [2024-07-21 03:44:29.173431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.173923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.173961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.174779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 
00:34:44.187 [2024-07-21 03:44:29.174902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.174930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.175872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.175912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.176066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.187 [2024-07-21 03:44:29.176093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.187 qpair failed and we were unable to recover it. 00:34:44.187 [2024-07-21 03:44:29.176212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 
00:34:44.188 [2024-07-21 03:44:29.176338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.176495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.176672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.176801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.176963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.176989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.177113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.177139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.177235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.177260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.177352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.177378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.177514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.177555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 00:34:44.188 [2024-07-21 03:44:29.177687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.188 [2024-07-21 03:44:29.177715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.188 qpair failed and we were unable to recover it. 
00:34:44.188 [2024-07-21 03:44:29.177850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.177890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.178896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.178986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.179899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.179925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.180899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.180990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.181910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.181935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.188 [2024-07-21 03:44:29.182027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.188 [2024-07-21 03:44:29.182053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420
00:34:44.188 qpair failed and we were unable to recover it.
00:34:44.193 [2024-07-21 03:44:29.206563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.206589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.206717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.206744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.206879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.206905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.206996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.207823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 
00:34:44.193 [2024-07-21 03:44:29.207945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.207970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.208890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.208925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 
00:34:44.193 [2024-07-21 03:44:29.209357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.209894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.209919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.210013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.210044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.210143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.210169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.210294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.210320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.210472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.193 [2024-07-21 03:44:29.210498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.193 qpair failed and we were unable to recover it. 00:34:44.193 [2024-07-21 03:44:29.210628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.210655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 
00:34:44.194 [2024-07-21 03:44:29.210741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.210767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.210913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.210938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.211937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.211976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 
00:34:44.194 [2024-07-21 03:44:29.212201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.212898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.212923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.194 [2024-07-21 03:44:29.213290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:44.194 [2024-07-21 03:44:29.213314] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.194 [2024-07-21 03:44:29.213333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.194 [2024-07-21 03:44:29.213345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:44.194 [2024-07-21 03:44:29.213560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:44.194 [2024-07-21 03:44:29.213589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:44.194 [2024-07-21 03:44:29.213665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:44.194 [2024-07-21 03:44:29.213669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:44.194 [2024-07-21 03:44:29.213297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.213972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.213998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it.
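The app_setup_trace notices above are the application's own how-to for capturing its tracepoints. A minimal shell sketch of acting on them from the test node follows; the spdk_trace invocations and the /dev/shm/nvmf_trace.0 path are taken verbatim from the notices, while the copy destination is an illustrative assumption.
# Snapshot the nvmf tracepoint events of instance 0 at runtime:
spdk_trace -s nvmf -i 0
# Running spdk_trace without parameters also works if this is the only
# SPDK application currently running:
spdk_trace
# Or keep the raw shared-memory trace file for offline analysis/debug
# (the /tmp destination is assumed, not from the log):
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0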
00:34:44.194 [2024-07-21 03:44:29.214255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.214889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.214915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.215005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.215030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.215124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.215150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.215263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.215292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.194 [2024-07-21 03:44:29.215389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.215416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 
00:34:44.194 [2024-07-21 03:44:29.215516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.194 [2024-07-21 03:44:29.215545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.194 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.215641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.215668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.215767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.215797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.215887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.215913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.216741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 
00:34:44.195 [2024-07-21 03:44:29.216889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.216919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.217880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.217906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 
00:34:44.195 [2024-07-21 03:44:29.218262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.218932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.218958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 
00:34:44.195 [2024-07-21 03:44:29.219574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.219889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.219918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.220005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.220031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.220142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.220168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.220266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.195 [2024-07-21 03:44:29.220292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.195 qpair failed and we were unable to recover it. 00:34:44.195 [2024-07-21 03:44:29.220393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.220421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.220512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.220539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.220652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.220691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.220786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.220814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 
00:34:44.196 [2024-07-21 03:44:29.220910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.220937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.221909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.221998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 
00:34:44.196 [2024-07-21 03:44:29.222277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5ec000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.222910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.222938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 
00:34:44.196 [2024-07-21 03:44:29.223541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.223972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.223998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5fc000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb5f4000b90 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 00:34:44.196 [2024-07-21 03:44:29.224789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 
00:34:44.196 [2024-07-21 03:44:29.224949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.196 [2024-07-21 03:44:29.224974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bba840 with addr=10.0.0.2, port=4420 00:34:44.196 qpair failed and we were unable to recover it. 
[... the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. triplet repeats for tqpairs 0x1bba840, 0x7fb5fc000b90, 0x7fb5f4000b90 and 0x7fb5ec000b90, all with addr=10.0.0.2, port=4420, from 03:44:29.225078 through 03:44:29.243287 ...]
00:34:44.201 A controller has encountered a failure and is being reset. 00:34:44.201 [2024-07-21 03:44:29.243473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.201 [2024-07-21 03:44:29.243517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc8390 with addr=10.0.0.2, port=4420 [2024-07-21 03:44:29.243538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8390 is same with the state(5) to be set 00:34:44.201 [2024-07-21 03:44:29.243563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc8390 (9): Bad file descriptor 00:34:44.201 [2024-07-21 03:44:29.243582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:44.201 [2024-07-21 03:44:29.243597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:44.201 [2024-07-21 03:44:29.243639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:44.201 Unable to reset the controller. 
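errno = 111 in the loop above is ECONNREFUSED: while the controller is down for reset, nothing is accepting on 10.0.0.2:4420, so every reconnect attempt is refused until the listener comes back. A minimal stand-alone probe of that condition, not part of the test scripts, assuming bash with /dev/tcp support and (optionally) nvme-cli on the initiator host:

#!/usr/bin/env bash
# Probe the NVMe/TCP listener that the reconnect loop above keeps failing on.
# A refused TCP connect here corresponds to the errno = 111 messages.
ADDR=10.0.0.2
PORT=4420
if timeout 2 bash -c "exec 3<>/dev/tcp/$ADDR/$PORT" 2>/dev/null; then
    echo "TCP listener is up on $ADDR:$PORT"
    # Optional fabrics-level connect; the NQN is the subsystem the test creates.
    nvme connect -t tcp -a "$ADDR" -s "$PORT" -n nqn.2016-06.io.spdk:cnode1
else
    echo "connect refused or timed out - matches the errno = 111 failures above"
fi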
00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 Malloc0 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 [2024-07-21 03:44:29.372137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 [2024-07-21 03:44:29.400383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.201 03:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2568367 00:34:45.132 Controller properly reset. 00:34:50.387 Initializing NVMe Controllers 00:34:50.387 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:50.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:50.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:50.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:50.387 Initialization complete. Launching workers. 
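The rpc_cmd calls traced above are the standard SPDK JSON-RPC methods; outside the test harness the same target bring-up can be driven with scripts/rpc.py directly. A sketch, assuming a running nvmf_tgt with the default /var/tmp/spdk.sock RPC socket; the NQN, serial number, address and port are the values shown in the trace:

#!/usr/bin/env bash
RPC=./scripts/rpc.py                       # path inside an SPDK checkout

$RPC bdev_malloc_create 64 512 -b Malloc0  # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_transport -t tcp -o       # flags exactly as in the trace above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420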
00:34:50.387 Starting thread on core 1 00:34:50.387 Starting thread on core 2 00:34:50.387 Starting thread on core 3 00:34:50.387 Starting thread on core 0 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:50.387 00:34:50.387 real 0m10.713s 00:34:50.387 user 0m34.000s 00:34:50.387 sys 0m7.373s 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.387 ************************************ 00:34:50.387 END TEST nvmf_target_disconnect_tc2 00:34:50.387 ************************************ 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.387 rmmod nvme_tcp 00:34:50.387 rmmod nvme_fabrics 00:34:50.387 rmmod nvme_keyring 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2568895 ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2568895 ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2568895' 00:34:50.387 killing process with pid 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 2568895 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:50.387 
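nvmftestfini above tears the initiator side back down: flush outstanding I/O, unload the kernel NVMe/TCP modules, kill the target process. Done by hand, the equivalent is roughly the following sketch (module names are the ones rmmod reports above; NVMF_TGT_PID is a placeholder for however the target app was started):

#!/usr/bin/env bash
sync                                              # flush outstanding I/O first
modprobe -v -r nvme-tcp                           # unload initiator modules,
modprobe -v -r nvme-fabrics                       # most dependent first
modprobe -v -r nvme-keyring 2>/dev/null || true   # may be absent on older kernels
kill "$NVMF_TGT_PID" 2>/dev/null || true          # stop the target app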
03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:50.387 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:50.388 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.388 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:50.388 03:44:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.388 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:50.388 03:44:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.915 03:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:52.915 00:34:52.915 real 0m15.438s 00:34:52.915 user 0m59.338s 00:34:52.915 sys 0m9.863s 00:34:52.915 03:44:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:52.915 03:44:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:52.915 ************************************ 00:34:52.915 END TEST nvmf_target_disconnect 00:34:52.915 ************************************ 00:34:52.915 03:44:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:52.916 03:44:37 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.916 03:44:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 03:44:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:52.916 00:34:52.916 real 26m59.224s 00:34:52.916 user 74m24.682s 00:34:52.916 sys 6m23.742s 00:34:52.916 03:44:37 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:52.916 03:44:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 ************************************ 00:34:52.916 END TEST nvmf_tcp 00:34:52.916 ************************************ 00:34:52.916 03:44:37 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:52.916 03:44:37 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.916 03:44:37 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:52.916 03:44:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:52.916 03:44:37 -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 ************************************ 00:34:52.916 START TEST spdkcli_nvmf_tcp 00:34:52.916 ************************************ 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.916 * Looking for test storage... 
00:34:52.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2570062 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2570062 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 2570062 ']' 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:52.916 03:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 [2024-07-21 03:44:37.852266] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:52.916 [2024-07-21 03:44:37.852349] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570062 ] 00:34:52.916 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.916 [2024-07-21 03:44:37.911700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:52.916 [2024-07-21 03:44:37.998002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.916 [2024-07-21 03:44:37.998006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.916 03:44:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:52.916 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:52.916 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:52.916 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:52.916 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:52.916 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:52.916 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:52.916 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.916 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:52.916 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.916 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:52.916 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:52.916 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:52.916 ' 00:34:55.450 [2024-07-21 03:44:40.647114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.840 [2024-07-21 03:44:41.887380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:59.374 [2024-07-21 03:44:44.182705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:01.289 [2024-07-21 03:44:46.141070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:02.671 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:02.671 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:02.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:02.671 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:02.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:02.671 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:02.671 03:44:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:02.929 03:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.187 03:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:03.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:03.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:03.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:03.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:03.187 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:03.187 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:03.187 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:03.187 ' 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:08.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:08.454 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:08.454 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:08.454 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2570062 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2570062 ']' 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2570062 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2570062 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2570062' 00:35:08.454 killing process with pid 2570062 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 2570062 00:35:08.454 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 2570062 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2570062 ']' 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2570062 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2570062 ']' 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2570062 00:35:08.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2570062) - No such process 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 2570062 is not found' 00:35:08.711 Process with pid 2570062 is not found 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:08.711 00:35:08.711 real 0m16.067s 00:35:08.711 user 0m34.056s 00:35:08.711 sys 0m0.811s 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:08.711 03:44:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.711 ************************************ 00:35:08.711 END TEST spdkcli_nvmf_tcp 00:35:08.711 ************************************ 00:35:08.711 03:44:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.711 03:44:53 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:08.711 03:44:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:08.711 03:44:53 -- common/autotest_common.sh@10 -- # set +x 00:35:08.711 ************************************ 00:35:08.711 START TEST nvmf_identify_passthru 00:35:08.711 ************************************ 00:35:08.711 03:44:53 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.711 * Looking for test storage... 00:35:08.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.711 03:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.711 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.711 03:44:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.711 03:44:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.711 03:44:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.711 03:44:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.711 03:44:53 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.711 03:44:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.711 03:44:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:08.712 03:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.712 03:44:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.712 03:44:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.712 03:44:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.712 03:44:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.712 03:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.712 03:44:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:08.712 03:44:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:08.712 03:44:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:08.712 03:44:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
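The trace above is nvmf/common.sh enumerating candidate NICs: it builds lists of known Intel E810/X722 and Mellanox PCI device IDs, then resolves each matching PCI function to its kernel net device through sysfs (the pci_net_devs glob visible below). A rough equivalent of that resolution step, using the two E810 ports this run discovers just below, would be:

  # map each NVMe-oF-capable PCI function to its net interface name
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 and cvl_0_1 on this host
  done

This is a sketch of the idea, not the script's literal code; as the trace shows, the harness additionally checks the bound driver and the link state ([[ up == up ]]) before accepting a device.
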
00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:10.606 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:10.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:10.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:10.607 03:44:55 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:10.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:10.607 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:10.607 03:44:55 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.607 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.869 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.869 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:10.869 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:10.869 03:44:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:10.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:35:10.869 00:35:10.869 --- 10.0.0.2 ping statistics --- 00:35:10.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.869 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:10.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:35:10.869 00:35:10.869 --- 10.0.0.1 ping statistics --- 00:35:10.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.869 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.869 03:44:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:35:10.869 03:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:10.869 03:44:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:10.869 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.088 
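The nvmf_tcp_init steps traced above give the target and the initiator separate network stacks on a single host: one port of the NIC pair is moved into a private namespace for the target, each side gets an address, TCP port 4420 is opened, and connectivity is verified with the two pings whose output appears above. Condensed from the ip/iptables calls in this log:

  ip netns add cvl_0_0_ns_spdk                               # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the host stack
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> host
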
03:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:15.088 03:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:15.088 03:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:15.088 03:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:15.088 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2574700 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.263 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2574700 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 2574700 ']' 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:19.263 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.520 [2024-07-21 03:45:04.613081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:19.520 [2024-07-21 03:45:04.613181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.520 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.520 [2024-07-21 03:45:04.684724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:19.520 [2024-07-21 03:45:04.777783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.520 [2024-07-21 03:45:04.777859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
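Note the ordering here: the target is launched with --wait-for-rpc, identify passthrough is enabled while initialization is still paused, and only then does framework_start_init let the app come up (the INFO: Requests / INFO: response JSON below is rpc_cmd's verbose echo of these calls). The same sequence expressed with the stock rpc.py client would look roughly like this, assuming the default /var/tmp/spdk.sock socket this run waits on:

  # start the target paused, inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede subsystem init
  ./scripts/rpc.py framework_start_init                        # resume startup
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as passed by the harness
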
00:35:19.520 [2024-07-21 03:45:04.777886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.520 [2024-07-21 03:45:04.777900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.520 [2024-07-21 03:45:04.777913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.520 [2024-07-21 03:45:04.777966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.520 [2024-07-21 03:45:04.777992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.520 [2024-07-21 03:45:04.778051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.520 [2024-07-21 03:45:04.778053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.520 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:19.520 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:35:19.520 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:19.520 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.520 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.520 INFO: Log level set to 20 00:35:19.520 INFO: Requests: 00:35:19.520 { 00:35:19.520 "jsonrpc": "2.0", 00:35:19.520 "method": "nvmf_set_config", 00:35:19.520 "id": 1, 00:35:19.520 "params": { 00:35:19.520 "admin_cmd_passthru": { 00:35:19.520 "identify_ctrlr": true 00:35:19.520 } 00:35:19.520 } 00:35:19.521 } 00:35:19.521 00:35:19.521 INFO: response: 00:35:19.521 { 00:35:19.521 "jsonrpc": "2.0", 00:35:19.521 "id": 1, 00:35:19.521 "result": true 00:35:19.521 } 00:35:19.521 00:35:19.521 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.521 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:19.521 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.521 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.521 INFO: Setting log level to 20 00:35:19.521 INFO: Setting log level to 20 00:35:19.521 INFO: Log level set to 20 00:35:19.521 INFO: Log level set to 20 00:35:19.521 INFO: Requests: 00:35:19.521 { 00:35:19.521 "jsonrpc": "2.0", 00:35:19.521 "method": "framework_start_init", 00:35:19.521 "id": 1 00:35:19.521 } 00:35:19.521 00:35:19.521 INFO: Requests: 00:35:19.521 { 00:35:19.521 "jsonrpc": "2.0", 00:35:19.521 "method": "framework_start_init", 00:35:19.521 "id": 1 00:35:19.521 } 00:35:19.521 00:35:19.778 [2024-07-21 03:45:04.925961] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:19.778 INFO: response: 00:35:19.778 { 00:35:19.778 "jsonrpc": "2.0", 00:35:19.778 "id": 1, 00:35:19.778 "result": true 00:35:19.778 } 00:35:19.778 00:35:19.778 INFO: response: 00:35:19.778 { 00:35:19.778 "jsonrpc": "2.0", 00:35:19.778 "id": 1, 00:35:19.778 "result": true 00:35:19.778 } 00:35:19.778 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.778 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.778 03:45:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:19.778 INFO: Setting log level to 40 00:35:19.778 INFO: Setting log level to 40 00:35:19.778 INFO: Setting log level to 40 00:35:19.778 [2024-07-21 03:45:04.936065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.778 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.778 03:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.778 03:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.053 Nvme0n1 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.053 [2024-07-21 03:45:07.825806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.053 [ 00:35:23.053 { 00:35:23.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:23.053 "subtype": "Discovery", 00:35:23.053 "listen_addresses": [], 00:35:23.053 "allow_any_host": true, 00:35:23.053 "hosts": [] 00:35:23.053 }, 00:35:23.053 { 00:35:23.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.053 "subtype": "NVMe", 00:35:23.053 "listen_addresses": [ 00:35:23.053 { 00:35:23.053 "trtype": "TCP", 00:35:23.053 "adrfam": "IPv4", 00:35:23.053 "traddr": "10.0.0.2", 00:35:23.053 "trsvcid": "4420" 00:35:23.053 } 00:35:23.053 ], 00:35:23.053 "allow_any_host": true, 00:35:23.053 "hosts": [], 00:35:23.053 "serial_number": 
"SPDK00000000000001", 00:35:23.053 "model_number": "SPDK bdev Controller", 00:35:23.053 "max_namespaces": 1, 00:35:23.053 "min_cntlid": 1, 00:35:23.053 "max_cntlid": 65519, 00:35:23.053 "namespaces": [ 00:35:23.053 { 00:35:23.053 "nsid": 1, 00:35:23.053 "bdev_name": "Nvme0n1", 00:35:23.053 "name": "Nvme0n1", 00:35:23.053 "nguid": "E669CD4F093D469AA5D74F0DC7431185", 00:35:23.053 "uuid": "e669cd4f-093d-469a-a5d7-4f0dc7431185" 00:35:23.053 } 00:35:23.053 ] 00:35:23.053 } 00:35:23.053 ] 00:35:23.053 03:45:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:23.053 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:23.053 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:23.054 03:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:23.054 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:23.054 03:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:23.054 rmmod nvme_tcp 00:35:23.054 rmmod nvme_fabrics 00:35:23.054 rmmod nvme_keyring 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:23.054 03:45:08 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2574700 ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2574700 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 2574700 ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 2574700 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2574700 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2574700' 00:35:23.054 killing process with pid 2574700 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 2574700 00:35:23.054 03:45:08 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 2574700 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:24.426 03:45:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.426 03:45:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:24.426 03:45:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.959 03:45:11 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:26.959 00:35:26.959 real 0m17.866s 00:35:26.959 user 0m26.156s 00:35:26.959 sys 0m2.272s 00:35:26.959 03:45:11 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:26.959 03:45:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.959 ************************************ 00:35:26.959 END TEST nvmf_identify_passthru 00:35:26.959 ************************************ 00:35:26.959 03:45:11 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:26.959 03:45:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:26.959 03:45:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:26.959 03:45:11 -- common/autotest_common.sh@10 -- # set +x 00:35:26.959 ************************************ 00:35:26.959 START TEST nvmf_dif 00:35:26.959 ************************************ 00:35:26.959 03:45:11 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:26.959 * Looking for test storage... 
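The passthru test that just finished exercises exactly the RPC sequence visible in the trace: attach the local PCIe drive as an SPDK bdev, export it through a single-controller NVMe/TCP subsystem, then read the identify data back over the fabric and compare the serial and model numbers against the physical controller. A condensed sketch of that flow, assuming scripts/rpc.py from the SPDK tree (which the suite's rpc_cmd helper wraps) and a target already up; all commands and arguments are taken from the trace above, with the long workspace paths shortened to repo-relative ones:

    # Attach the local NVMe drive (PCI address as seen in this run)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    # Export it through a one-controller subsystem with a TCP listener
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Identify over the fabric; the test greps 'Serial Number:' / 'Model Number:'
    # from this output and compares against the local controller
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

If either comparison fails, the test's '!=' checks trip and the trap tears the subsystem down early; here both matched (PHLJ916004901P0FGN / INTEL), so the subsystem is deleted normally.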
00:35:26.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.959 03:45:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.959 03:45:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.959 03:45:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.959 03:45:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.959 03:45:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.959 03:45:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.959 03:45:11 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:26.959 03:45:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:26.959 03:45:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.959 03:45:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.959 03:45:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.959 03:45:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.959 03:45:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:28.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:28.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
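The device scan above has matched two E810 ports (8086:0x159b, bound to the ice driver); the loop now being traced, confirmed by the "Found net devices under ..." lines that follow, maps each PCI function to its kernel netdev by globbing sysfs. Standalone, that mapping amounts to the sketch below (the sysfs layout is the standard kernel one; the PCI address is from this run):

    pci=0000:0a:00.0
    # Each network function exposes its bound netdev name under .../net/
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done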
00:35:28.858 03:45:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:28.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:28.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.859 03:45:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:28.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:35:28.859 00:35:28.859 --- 10.0.0.2 ping statistics --- 00:35:28.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.859 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:28.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:35:28.859 00:35:28.859 --- 10.0.0.1 ping statistics --- 00:35:28.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.859 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:28.859 03:45:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:29.789 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:29.789 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:29.789 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:29.789 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:29.789 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:29.789 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:29.789 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:29.789 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:29.789 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:29.789 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:29.789 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:29.789 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:29.789 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:29.789 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:29.789 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:29.789 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:29.789 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:29.789 03:45:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:29.789 03:45:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2578338 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:29.789 03:45:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2578338 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 2578338 ']' 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:29.789 03:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.045 [2024-07-21 03:45:15.110452] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:30.045 [2024-07-21 03:45:15.110537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.045 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.045 [2024-07-21 03:45:15.181277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.045 [2024-07-21 03:45:15.274949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.045 [2024-07-21 03:45:15.275017] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.045 [2024-07-21 03:45:15.275033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.045 [2024-07-21 03:45:15.275047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.045 [2024-07-21 03:45:15.275059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
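Before the target app came up, nvmf_tcp_init split the two NIC ports across network namespaces so a single host can drive real TCP traffic against itself: the target-side port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, and nvmf_tgt is then launched inside the namespace. Reconstructed from the trace (interface names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target then runs entirely inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The ping checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify both directions of that path before any NVMe/TCP traffic is attempted.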
00:35:30.045 [2024-07-21 03:45:15.275100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:30.303 03:45:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 03:45:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.303 03:45:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:30.303 03:45:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 [2024-07-21 03:45:15.419099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.303 03:45:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 ************************************ 00:35:30.303 START TEST fio_dif_1_default 00:35:30.303 ************************************ 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 bdev_null0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.303 [2024-07-21 03:45:15.475392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.303 { 00:35:30.303 "params": { 00:35:30.303 "name": "Nvme$subsystem", 00:35:30.303 "trtype": "$TEST_TRANSPORT", 00:35:30.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.303 "adrfam": "ipv4", 00:35:30.303 "trsvcid": "$NVMF_PORT", 00:35:30.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.303 "hdgst": ${hdgst:-false}, 00:35:30.303 "ddgst": ${ddgst:-false} 00:35:30.303 }, 00:35:30.303 "method": "bdev_nvme_attach_controller" 00:35:30.303 } 00:35:30.303 EOF 00:35:30.303 )") 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.303 "params": { 00:35:30.303 "name": "Nvme0", 00:35:30.303 "trtype": "tcp", 00:35:30.303 "traddr": "10.0.0.2", 00:35:30.303 "adrfam": "ipv4", 00:35:30.303 "trsvcid": "4420", 00:35:30.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.303 "hdgst": false, 00:35:30.303 "ddgst": false 00:35:30.303 }, 00:35:30.303 "method": "bdev_nvme_attach_controller" 00:35:30.303 }' 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.303 03:45:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.560 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:30.560 fio-3.35 00:35:30.560 Starting 1 thread 00:35:30.560 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.775 00:35:42.775 filename0: (groupid=0, jobs=1): err= 0: pid=2578567: Sun Jul 21 03:45:26 2024 00:35:42.775 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10010msec) 00:35:42.775 slat (nsec): min=4507, max=45680, avg=8345.62, stdev=2672.93 00:35:42.775 clat (usec): min=540, max=48636, avg=21091.59, stdev=20366.84 00:35:42.775 lat (usec): min=547, max=48656, avg=21099.94, stdev=20366.59 00:35:42.775 clat percentiles (usec): 00:35:42.775 | 1.00th=[ 611], 5.00th=[ 627], 10.00th=[ 627], 20.00th=[ 644], 00:35:42.775 | 30.00th=[ 652], 40.00th=[ 660], 50.00th=[41157], 60.00th=[41157], 00:35:42.775 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:42.775 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:35:42.775 | 99.99th=[48497] 00:35:42.775 bw ( KiB/s): min= 672, max= 768, per=99.78%, avg=756.80, stdev=28.00, samples=20 00:35:42.775 iops : min= 168, max= 192, 
avg=189.20, stdev= 7.00, samples=20 00:35:42.775 lat (usec) : 750=49.58%, 1000=0.21% 00:35:42.775 lat (msec) : 50=50.21% 00:35:42.775 cpu : usr=90.05%, sys=9.67%, ctx=21, majf=0, minf=283 00:35:42.775 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.775 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.775 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:42.775 00:35:42.775 Run status group 0 (all jobs): 00:35:42.775 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10010-10010msec 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 00:35:42.775 real 0m11.114s 00:35:42.775 user 0m10.140s 00:35:42.775 sys 0m1.270s 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 ************************************ 00:35:42.775 END TEST fio_dif_1_default 00:35:42.775 ************************************ 00:35:42.775 03:45:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:42.775 03:45:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:42.775 03:45:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 ************************************ 00:35:42.775 START TEST fio_dif_1_multi_subsystems 00:35:42.775 ************************************ 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.775 03:45:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 bdev_null0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 [2024-07-21 03:45:26.637072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 bdev_null1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.775 { 00:35:42.775 "params": { 00:35:42.775 "name": "Nvme$subsystem", 00:35:42.775 "trtype": "$TEST_TRANSPORT", 00:35:42.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.775 "adrfam": "ipv4", 00:35:42.775 "trsvcid": "$NVMF_PORT", 00:35:42.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.775 "hdgst": ${hdgst:-false}, 00:35:42.775 "ddgst": ${ddgst:-false} 00:35:42.775 }, 00:35:42.775 "method": "bdev_nvme_attach_controller" 00:35:42.775 } 00:35:42.775 EOF 00:35:42.775 )") 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:42.775 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.776 
03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.776 { 00:35:42.776 "params": { 00:35:42.776 "name": "Nvme$subsystem", 00:35:42.776 "trtype": "$TEST_TRANSPORT", 00:35:42.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.776 "adrfam": "ipv4", 00:35:42.776 "trsvcid": "$NVMF_PORT", 00:35:42.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.776 "hdgst": ${hdgst:-false}, 00:35:42.776 "ddgst": ${ddgst:-false} 00:35:42.776 }, 00:35:42.776 "method": "bdev_nvme_attach_controller" 00:35:42.776 } 00:35:42.776 EOF 00:35:42.776 )") 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
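In both fio_dif tests, fio_bdev is a thin wrapper: gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per subsystem (comma-joined and pretty-printed by jq, as the final JSON printed next shows), gen_fio_conf writes the job file, and both are handed to fio over inherited file descriptors while LD_PRELOAD supplies SPDK's external ioengine plugin. Stripped of the xtrace noise, the invocation is roughly (binary and plugin paths as in this run; the positional /dev/fd/61 is fio's job file):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # /dev/fd/62: generated JSON attaching Nvme0/Nvme1 over NVMe/TCP
    # /dev/fd/61: the fio job description from gen_fio_conf

This is why the fio headers below report ioengine=spdk_bdev: fio never touches a kernel block device, it drives the SPDK bdevs created from the TCP-attached controllers directly.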
00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:42.776 "params": { 00:35:42.776 "name": "Nvme0", 00:35:42.776 "trtype": "tcp", 00:35:42.776 "traddr": "10.0.0.2", 00:35:42.776 "adrfam": "ipv4", 00:35:42.776 "trsvcid": "4420", 00:35:42.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.776 "hdgst": false, 00:35:42.776 "ddgst": false 00:35:42.776 }, 00:35:42.776 "method": "bdev_nvme_attach_controller" 00:35:42.776 },{ 00:35:42.776 "params": { 00:35:42.776 "name": "Nvme1", 00:35:42.776 "trtype": "tcp", 00:35:42.776 "traddr": "10.0.0.2", 00:35:42.776 "adrfam": "ipv4", 00:35:42.776 "trsvcid": "4420", 00:35:42.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.776 "hdgst": false, 00:35:42.776 "ddgst": false 00:35:42.776 }, 00:35:42.776 "method": "bdev_nvme_attach_controller" 00:35:42.776 }' 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.776 03:45:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.776 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.776 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.776 fio-3.35 00:35:42.776 Starting 2 threads 00:35:42.776 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.734 00:35:52.734 filename0: (groupid=0, jobs=1): err= 0: pid=2579967: Sun Jul 21 03:45:37 2024 00:35:52.734 read: IOPS=189, BW=758KiB/s (777kB/s)(7600KiB/10021msec) 00:35:52.734 slat (nsec): min=6986, max=95144, avg=10211.01, stdev=5729.39 00:35:52.734 clat (usec): min=580, max=42725, avg=21064.14, stdev=20318.02 00:35:52.734 lat (usec): min=588, max=42755, avg=21074.35, stdev=20317.19 00:35:52.734 clat percentiles (usec): 00:35:52.734 | 1.00th=[ 627], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:35:52.734 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[41157], 60.00th=[41157], 00:35:52.734 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:52.734 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:52.734 | 99.99th=[42730] 
00:35:52.734 bw ( KiB/s): min= 704, max= 832, per=56.99%, avg=758.40, stdev=29.55, samples=20 00:35:52.734 iops : min= 176, max= 208, avg=189.60, stdev= 7.39, samples=20 00:35:52.734 lat (usec) : 750=37.53%, 1000=11.74% 00:35:52.734 lat (msec) : 2=0.63%, 50=50.11% 00:35:52.734 cpu : usr=97.39%, sys=2.34%, ctx=15, majf=0, minf=291 00:35:52.734 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.734 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.734 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.734 filename1: (groupid=0, jobs=1): err= 0: pid=2579968: Sun Jul 21 03:45:37 2024 00:35:52.734 read: IOPS=143, BW=572KiB/s (586kB/s)(5728KiB/10009msec) 00:35:52.734 slat (nsec): min=7883, max=45490, avg=11143.79, stdev=4955.11 00:35:52.734 clat (usec): min=598, max=42256, avg=27922.39, stdev=18950.79 00:35:52.734 lat (usec): min=607, max=42284, avg=27933.53, stdev=18950.62 00:35:52.734 clat percentiles (usec): 00:35:52.734 | 1.00th=[ 611], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 766], 00:35:52.734 | 30.00th=[ 807], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:52.734 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:52.734 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:52.734 | 99.99th=[42206] 00:35:52.734 bw ( KiB/s): min= 384, max= 768, per=42.93%, avg=571.20, stdev=181.99, samples=20 00:35:52.734 iops : min= 96, max= 192, avg=142.80, stdev=45.50, samples=20 00:35:52.734 lat (usec) : 750=14.80%, 1000=17.88% 00:35:52.734 lat (msec) : 50=67.32% 00:35:52.734 cpu : usr=96.57%, sys=3.12%, ctx=14, majf=0, minf=87 00:35:52.734 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.734 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.734 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.734 00:35:52.734 Run status group 0 (all jobs): 00:35:52.734 READ: bw=1330KiB/s (1362kB/s), 572KiB/s-758KiB/s (586kB/s-777kB/s), io=13.0MiB (13.6MB), run=10009-10021msec 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.734 03:45:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.734 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.734 00:35:52.735 real 0m11.214s 00:35:52.735 user 0m20.672s 00:35:52.735 sys 0m0.836s 00:35:52.735 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 ************************************ 00:35:52.735 END TEST fio_dif_1_multi_subsystems 00:35:52.735 ************************************ 00:35:52.735 03:45:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:52.735 03:45:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:52.735 03:45:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 ************************************ 00:35:52.735 START TEST fio_dif_rand_params 00:35:52.735 ************************************ 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 bdev_null0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.735 [2024-07-21 03:45:37.893325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.735 { 00:35:52.735 "params": { 00:35:52.735 "name": "Nvme$subsystem", 00:35:52.735 "trtype": "$TEST_TRANSPORT", 00:35:52.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.735 "adrfam": "ipv4", 00:35:52.735 "trsvcid": "$NVMF_PORT", 00:35:52.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.735 "hdgst": ${hdgst:-false}, 00:35:52.735 "ddgst": ${ddgst:-false} 00:35:52.735 }, 00:35:52.735 "method": "bdev_nvme_attach_controller" 00:35:52.735 } 00:35:52.735 EOF 00:35:52.735 )") 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
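The fio_plugin wrapper traced above solves a link-order problem: when SPDK is built with ASan, the sanitizer runtime has to appear in LD_PRELOAD ahead of the spdk_bdev ioengine, or fio fails to dlopen the plugin. Stripped of harness plumbing, the logic amounts to the following sketch (paths abbreviated; the real function lives in autotest_common.sh):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # pick out the sanitizer runtime the plugin links against, if any
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # preload the runtime (empty in this trace, since the build has no ASan)
    # ahead of the plugin itself, then hand fio the generated configs
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run both ldd probes come back empty, which is why the trace shows LD_PRELOAD set to just the plugin path with a leading space.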
00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:52.735 "params": { 00:35:52.735 "name": "Nvme0", 00:35:52.735 "trtype": "tcp", 00:35:52.735 "traddr": "10.0.0.2", 00:35:52.735 "adrfam": "ipv4", 00:35:52.735 "trsvcid": "4420", 00:35:52.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.735 "hdgst": false, 00:35:52.735 "ddgst": false 00:35:52.735 }, 00:35:52.735 "method": "bdev_nvme_attach_controller" 00:35:52.735 }' 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.735 03:45:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.993 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:52.994 ... 
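The job file fio reads from /dev/fd/61 is produced by gen_fio_conf from the parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=5, one file). The trace never prints the generated file, so the following is only an approximate sketch of an equivalent job file, and the bdev name Nvme0n1 assumes the controller "Nvme0" attached above exposes namespace 1 under its default name:

    # job.fio - approximate equivalent of the generated job file (sketch, not a dump)
    [global]
    thread=1
    group_reporting=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    # run with the LD_PRELOAD assembled above so the spdk_bdev engine resolves:
    #   LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    #       --spdk_json_conf bdev.json job.fio

The "filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB ... iodepth=3" banner and "Starting 3 threads" below confirm the parameters fio actually picked up from the generated file.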
00:35:52.994 fio-3.35 00:35:52.994 Starting 3 threads 00:35:52.994 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.548 00:35:59.548 filename0: (groupid=0, jobs=1): err= 0: pid=2581362: Sun Jul 21 03:45:43 2024 00:35:59.548 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5048msec) 00:35:59.548 slat (nsec): min=4769, max=60917, avg=16219.26, stdev=5335.01 00:35:59.548 clat (usec): min=4816, max=55282, avg=13064.88, stdev=8107.89 00:35:59.548 lat (usec): min=4844, max=55301, avg=13081.10, stdev=8107.79 00:35:59.548 clat percentiles (usec): 00:35:59.548 | 1.00th=[ 5145], 5.00th=[ 7111], 10.00th=[ 8356], 20.00th=[ 9503], 00:35:59.548 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:35:59.548 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13960], 95.00th=[14746], 00:35:59.548 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:35:59.548 | 99.99th=[55313] 00:35:59.548 bw ( KiB/s): min=27392, max=33536, per=33.92%, avg=29465.60, stdev=1799.91, samples=10 00:35:59.548 iops : min= 214, max= 262, avg=230.20, stdev=14.06, samples=10 00:35:59.548 lat (msec) : 10=22.27%, 20=73.66%, 50=1.56%, 100=2.51% 00:35:59.548 cpu : usr=86.86%, sys=9.21%, ctx=552, majf=0, minf=138 00:35:59.548 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 issued rwts: total=1154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.548 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.548 filename0: (groupid=0, jobs=1): err= 0: pid=2581363: Sun Jul 21 03:45:43 2024 00:35:59.548 read: IOPS=217, BW=27.2MiB/s (28.6MB/s)(136MiB/5008msec) 00:35:59.548 slat (nsec): min=4371, max=34782, avg=14201.26, stdev=2374.05 00:35:59.548 clat (usec): min=4978, max=54151, avg=13744.64, stdev=9383.36 00:35:59.548 lat (usec): min=4991, max=54165, avg=13758.84, stdev=9383.38 00:35:59.548 clat percentiles (usec): 00:35:59.548 | 1.00th=[ 6390], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[10683], 00:35:59.548 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:35:59.548 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[49021], 00:35:59.548 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[54264], 00:35:59.548 | 99.99th=[54264] 00:35:59.548 bw ( KiB/s): min=23552, max=32000, per=32.09%, avg=27878.40, stdev=2758.39, samples=10 00:35:59.548 iops : min= 184, max= 250, avg=217.80, stdev=21.55, samples=10 00:35:59.548 lat (msec) : 10=15.67%, 20=78.55%, 50=1.74%, 100=4.03% 00:35:59.548 cpu : usr=92.79%, sys=6.59%, ctx=31, majf=0, minf=91 00:35:59.548 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 issued rwts: total=1091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.548 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.548 filename0: (groupid=0, jobs=1): err= 0: pid=2581364: Sun Jul 21 03:45:43 2024 00:35:59.548 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(148MiB/5007msec) 00:35:59.548 slat (nsec): min=4038, max=42949, avg=14480.80, stdev=2516.66 00:35:59.548 clat (usec): min=4789, max=54692, avg=12699.72, stdev=6072.54 00:35:59.548 lat (usec): min=4802, max=54706, avg=12714.20, stdev=6072.49 00:35:59.548 clat percentiles (usec): 
00:35:59.548 | 1.00th=[ 5211], 5.00th=[ 6849], 10.00th=[ 8225], 20.00th=[ 9110], 00:35:59.548 | 30.00th=[10683], 40.00th=[11863], 50.00th=[12518], 60.00th=[12911], 00:35:59.548 | 70.00th=[13435], 80.00th=[14484], 90.00th=[15795], 95.00th=[16581], 00:35:59.548 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:35:59.548 | 99.99th=[54789] 00:35:59.548 bw ( KiB/s): min=25344, max=36096, per=34.72%, avg=30162.90, stdev=3072.83, samples=10 00:35:59.548 iops : min= 198, max= 282, avg=235.60, stdev=24.00, samples=10 00:35:59.548 lat (msec) : 10=27.18%, 20=70.62%, 50=1.02%, 100=1.19% 00:35:59.548 cpu : usr=92.17%, sys=6.99%, ctx=53, majf=0, minf=56 00:35:59.548 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.548 issued rwts: total=1181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.548 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.548 00:35:59.548 Run status group 0 (all jobs): 00:35:59.548 READ: bw=84.8MiB/s (89.0MB/s), 27.2MiB/s-29.5MiB/s (28.6MB/s-30.9MB/s), io=428MiB (449MB), run=5007-5048msec 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:59.548 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
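The create_subsystems 0 1 2 loop traced next repeats, per subsystem, the same four RPCs used for the single-subsystem case above, this time with --dif-type 2. rpc_cmd is the harness's wrapper around scripts/rpc.py; issued by hand against the target's RPC socket, the calls for subsystem 0 would be:

    # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF protection type 2
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

Subsystems 1 and 2 differ only in the bdev name and the NQN and serial-number suffix, as the repeated rpc_cmd traces below show.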
00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 bdev_null0 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 [2024-07-21 03:45:44.032644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 bdev_null1 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 bdev_null2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.549 { 00:35:59.549 "params": { 00:35:59.549 "name": "Nvme$subsystem", 00:35:59.549 "trtype": "$TEST_TRANSPORT", 00:35:59.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.549 "adrfam": "ipv4", 00:35:59.549 "trsvcid": "$NVMF_PORT", 00:35:59.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.549 "hdgst": ${hdgst:-false}, 00:35:59.549 "ddgst": ${ddgst:-false} 00:35:59.549 }, 00:35:59.549 "method": "bdev_nvme_attach_controller" 00:35:59.549 } 00:35:59.549 EOF 00:35:59.549 )") 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.549 { 00:35:59.549 "params": { 00:35:59.549 "name": "Nvme$subsystem", 00:35:59.549 "trtype": "$TEST_TRANSPORT", 00:35:59.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.549 "adrfam": "ipv4", 00:35:59.549 "trsvcid": "$NVMF_PORT", 00:35:59.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.549 "hdgst": ${hdgst:-false}, 00:35:59.549 "ddgst": ${ddgst:-false} 00:35:59.549 }, 00:35:59.549 "method": "bdev_nvme_attach_controller" 00:35:59.549 } 00:35:59.549 EOF 00:35:59.549 )") 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.549 { 00:35:59.549 "params": { 00:35:59.549 "name": "Nvme$subsystem", 00:35:59.549 "trtype": "$TEST_TRANSPORT", 00:35:59.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.549 "adrfam": "ipv4", 00:35:59.549 "trsvcid": "$NVMF_PORT", 00:35:59.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.549 "hdgst": ${hdgst:-false}, 00:35:59.549 "ddgst": ${ddgst:-false} 00:35:59.549 }, 00:35:59.549 "method": "bdev_nvme_attach_controller" 00:35:59.549 } 00:35:59.549 EOF 00:35:59.549 )") 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:59.549 03:45:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:59.549 "params": { 00:35:59.549 "name": "Nvme0", 00:35:59.549 "trtype": "tcp", 00:35:59.549 "traddr": "10.0.0.2", 00:35:59.549 "adrfam": "ipv4", 00:35:59.549 "trsvcid": "4420", 00:35:59.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.549 "hdgst": false, 00:35:59.549 "ddgst": false 00:35:59.549 }, 00:35:59.549 "method": "bdev_nvme_attach_controller" 00:35:59.549 },{ 00:35:59.549 "params": { 00:35:59.549 "name": "Nvme1", 00:35:59.549 "trtype": "tcp", 00:35:59.549 "traddr": "10.0.0.2", 00:35:59.550 "adrfam": "ipv4", 00:35:59.550 "trsvcid": "4420", 00:35:59.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:59.550 "hdgst": false, 00:35:59.550 "ddgst": false 00:35:59.550 }, 00:35:59.550 "method": "bdev_nvme_attach_controller" 00:35:59.550 },{ 00:35:59.550 "params": { 00:35:59.550 "name": "Nvme2", 00:35:59.550 "trtype": "tcp", 00:35:59.550 "traddr": "10.0.0.2", 00:35:59.550 "adrfam": "ipv4", 00:35:59.550 "trsvcid": "4420", 00:35:59.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:59.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:59.550 "hdgst": false, 00:35:59.550 "ddgst": false 00:35:59.550 }, 00:35:59.550 "method": "bdev_nvme_attach_controller" 00:35:59.550 }' 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:59.550 03:45:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.550 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.550 ... 00:35:59.550 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.550 ... 00:35:59.550 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.550 ... 00:35:59.550 fio-3.35 00:35:59.550 Starting 24 threads 00:35:59.550 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.736 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582230: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=480, BW=1920KiB/s (1966kB/s)(18.8MiB/10024msec) 00:36:11.736 slat (nsec): min=8491, max=97696, avg=33189.23, stdev=13309.81 00:36:11.736 clat (usec): min=23017, max=44221, avg=33022.79, stdev=1385.19 00:36:11.736 lat (usec): min=23051, max=44276, avg=33055.98, stdev=1383.75 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35914], 00:36:11.736 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:36:11.736 | 99.99th=[44303] 00:36:11.736 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.736 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.736 lat (msec) : 50=100.00% 00:36:11.736 cpu : usr=98.09%, sys=1.50%, ctx=16, majf=0, minf=9 00:36:11.736 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582231: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10006msec) 00:36:11.736 slat (usec): min=5, max=106, avg=42.24, stdev=22.52 00:36:11.736 clat (usec): min=16419, max=42255, avg=32878.15, stdev=1499.58 00:36:11.736 lat (usec): min=16464, max=42282, avg=32920.38, stdev=1496.66 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[36963], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:36:11.736 | 99.99th=[42206] 00:36:11.736 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.00, stdev=71.93, samples=20 00:36:11.736 iops : min= 448, max= 512, avg=480.00, stdev=17.98, samples=20 00:36:11.736 lat (msec) : 20=0.33%, 50=99.67% 00:36:11.736 cpu : usr=95.96%, 
sys=2.37%, ctx=237, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582232: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10011msec) 00:36:11.736 slat (nsec): min=8557, max=70235, avg=32761.51, stdev=8900.10 00:36:11.736 clat (usec): min=10768, max=51687, avg=32948.08, stdev=2028.28 00:36:11.736 lat (usec): min=10794, max=51736, avg=32980.84, stdev=2028.67 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[36963], 99.50th=[42206], 99.90th=[51643], 99.95th=[51643], 00:36:11.736 | 99.99th=[51643] 00:36:11.736 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=85.33, samples=19 00:36:11.736 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.736 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:11.736 cpu : usr=98.14%, sys=1.41%, ctx=16, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582233: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.736 slat (usec): min=8, max=129, avg=43.04, stdev=19.35 00:36:11.736 clat (usec): min=18712, max=65600, avg=32966.66, stdev=2337.70 00:36:11.736 lat (usec): min=18756, max=65644, avg=33009.71, stdev=2336.99 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[36963], 99.50th=[43779], 99.90th=[65274], 99.95th=[65274], 00:36:11.736 | 99.99th=[65799] 00:36:11.736 bw ( KiB/s): min= 1536, max= 2048, per=4.17%, avg=1920.00, stdev=104.51, samples=19 00:36:11.736 iops : min= 384, max= 512, avg=480.00, stdev=26.13, samples=19 00:36:11.736 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:36:11.736 cpu : usr=98.20%, sys=1.38%, ctx=14, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582234: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=479, BW=1918KiB/s 
(1965kB/s)(18.8MiB/10008msec) 00:36:11.736 slat (nsec): min=8099, max=80556, avg=14666.60, stdev=10815.05 00:36:11.736 clat (usec): min=30939, max=54871, avg=33214.49, stdev=1657.77 00:36:11.736 lat (usec): min=30948, max=54921, avg=33229.15, stdev=1661.50 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:11.736 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:11.736 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[36963], 99.50th=[43779], 99.90th=[54789], 99.95th=[54789], 00:36:11.736 | 99.99th=[54789] 00:36:11.736 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=85.33, samples=19 00:36:11.736 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.736 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.736 cpu : usr=98.24%, sys=1.39%, ctx=16, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582235: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10022msec) 00:36:11.736 slat (usec): min=9, max=117, avg=36.19, stdev=12.22 00:36:11.736 clat (usec): min=23054, max=52581, avg=32952.12, stdev=1430.03 00:36:11.736 lat (usec): min=23068, max=52603, avg=32988.31, stdev=1429.32 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:36:11.736 | 99.00th=[36439], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:36:11.736 | 99.99th=[52691] 00:36:11.736 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.736 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.736 lat (msec) : 50=99.96%, 100=0.04% 00:36:11.736 cpu : usr=97.94%, sys=1.50%, ctx=39, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582236: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:36:11.736 slat (nsec): min=8816, max=70235, avg=32401.37, stdev=8906.91 00:36:11.736 clat (usec): min=25893, max=79835, avg=33094.10, stdev=2469.14 00:36:11.736 lat (usec): min=25941, max=79896, avg=33126.50, stdev=2468.58 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.736 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.736 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[36963], 99.50th=[42206], 99.90th=[69731], 
99.95th=[69731], 00:36:11.736 | 99.99th=[80217] 00:36:11.736 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1913.26, stdev=99.82, samples=19 00:36:11.736 iops : min= 384, max= 512, avg=478.32, stdev=24.96, samples=19 00:36:11.736 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.736 cpu : usr=98.12%, sys=1.48%, ctx=14, majf=0, minf=9 00:36:11.736 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.736 filename0: (groupid=0, jobs=1): err= 0: pid=2582237: Sun Jul 21 03:45:55 2024 00:36:11.736 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.736 slat (nsec): min=8760, max=97817, avg=37577.94, stdev=17678.81 00:36:11.736 clat (usec): min=18737, max=65505, avg=33029.86, stdev=2658.84 00:36:11.736 lat (usec): min=18774, max=65549, avg=33067.44, stdev=2659.66 00:36:11.736 clat percentiles (usec): 00:36:11.736 | 1.00th=[30540], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.736 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:11.736 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:11.736 | 99.00th=[43779], 99.50th=[45351], 99.90th=[65274], 99.95th=[65274], 00:36:11.736 | 99.99th=[65274] 00:36:11.736 bw ( KiB/s): min= 1539, max= 2048, per=4.17%, avg=1920.16, stdev=102.94, samples=19 00:36:11.736 iops : min= 384, max= 512, avg=480.00, stdev=25.89, samples=19 00:36:11.736 lat (msec) : 20=0.58%, 50=99.08%, 100=0.33% 00:36:11.736 cpu : usr=96.55%, sys=2.18%, ctx=153, majf=0, minf=9 00:36:11.736 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.736 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582238: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:36:11.737 slat (usec): min=11, max=119, avg=74.56, stdev=10.65 00:36:11.737 clat (usec): min=29704, max=55036, avg=32696.81, stdev=1766.77 00:36:11.737 lat (usec): min=29783, max=55088, avg=32771.37, stdev=1762.70 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:36:11.737 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:36:11.737 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:36:11.737 | 99.00th=[36439], 99.50th=[43254], 99.90th=[54789], 99.95th=[54789], 00:36:11.737 | 99.99th=[54789] 00:36:11.737 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=85.33, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.737 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.737 cpu : usr=98.07%, sys=1.48%, ctx=16, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:11.737 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582239: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.737 slat (nsec): min=8206, max=59160, avg=26609.83, stdev=10227.48 00:36:11.737 clat (usec): min=18749, max=65380, avg=33092.24, stdev=2360.19 00:36:11.737 lat (usec): min=18761, max=65419, avg=33118.85, stdev=2360.16 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.737 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.737 | 99.00th=[37487], 99.50th=[43779], 99.90th=[65274], 99.95th=[65274], 00:36:11.737 | 99.99th=[65274] 00:36:11.737 bw ( KiB/s): min= 1536, max= 2048, per=4.17%, avg=1920.00, stdev=104.51, samples=19 00:36:11.737 iops : min= 384, max= 512, avg=480.00, stdev=26.13, samples=19 00:36:11.737 lat (msec) : 20=0.33%, 50=99.29%, 100=0.38% 00:36:11.737 cpu : usr=96.60%, sys=2.19%, ctx=119, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582240: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10014msec) 00:36:11.737 slat (nsec): min=8329, max=98440, avg=27839.56, stdev=13130.69 00:36:11.737 clat (usec): min=18891, max=43942, avg=33038.13, stdev=1398.54 00:36:11.737 lat (usec): min=18960, max=43960, avg=33065.97, stdev=1396.15 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:36:11.737 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35914], 00:36:11.737 | 99.00th=[36439], 99.50th=[38011], 99.90th=[43779], 99.95th=[43779], 00:36:11.737 | 99.99th=[43779] 00:36:11.737 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1920.15, stdev=71.65, samples=20 00:36:11.737 iops : min= 448, max= 512, avg=480.00, stdev=17.98, samples=20 00:36:11.737 lat (msec) : 20=0.33%, 50=99.67% 00:36:11.737 cpu : usr=97.13%, sys=1.88%, ctx=62, majf=0, minf=11 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582241: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.737 slat (usec): min=8, max=113, avg=48.67, stdev=22.18 00:36:11.737 clat (usec): min=18714, max=79737, avg=32904.16, stdev=2566.72 00:36:11.737 lat (usec): min=18735, max=79777, avg=32952.83, stdev=2565.53 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 
1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:11.737 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:11.737 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:11.737 | 99.00th=[37487], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:36:11.737 | 99.99th=[80217] 00:36:11.737 bw ( KiB/s): min= 1539, max= 2048, per=4.17%, avg=1920.16, stdev=103.90, samples=19 00:36:11.737 iops : min= 384, max= 512, avg=480.00, stdev=26.13, samples=19 00:36:11.737 lat (msec) : 20=0.33%, 50=99.19%, 100=0.48% 00:36:11.737 cpu : usr=98.42%, sys=1.17%, ctx=13, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582242: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10024msec) 00:36:11.737 slat (nsec): min=8376, max=98435, avg=32360.48, stdev=11238.39 00:36:11.737 clat (usec): min=23018, max=44255, avg=33042.77, stdev=1356.85 00:36:11.737 lat (usec): min=23059, max=44312, avg=33075.13, stdev=1355.81 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.737 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35914], 00:36:11.737 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:36:11.737 | 99.99th=[44303] 00:36:11.737 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.737 lat (msec) : 50=100.00% 00:36:11.737 cpu : usr=97.17%, sys=1.88%, ctx=58, majf=0, minf=9 00:36:11.737 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582243: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=481, BW=1924KiB/s (1971kB/s)(18.8MiB/10010msec) 00:36:11.737 slat (nsec): min=8571, max=75950, avg=30279.49, stdev=8214.44 00:36:11.737 clat (usec): min=10794, max=50720, avg=32974.51, stdev=1994.11 00:36:11.737 lat (usec): min=10803, max=50757, avg=33004.79, stdev=1994.57 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.737 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.737 | 99.00th=[36963], 99.50th=[42206], 99.90th=[50594], 99.95th=[50594], 00:36:11.737 | 99.99th=[50594] 00:36:11.737 bw ( KiB/s): min= 1667, max= 2048, per=4.17%, avg=1920.16, stdev=84.83, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.737 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:11.737 cpu : 
usr=96.16%, sys=2.34%, ctx=288, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582244: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1918KiB/s (1965kB/s)(18.8MiB/10008msec) 00:36:11.737 slat (usec): min=11, max=102, avg=30.98, stdev= 9.13 00:36:11.737 clat (usec): min=25927, max=70695, avg=33079.24, stdev=2001.26 00:36:11.737 lat (usec): min=25973, max=70789, avg=33110.22, stdev=2003.61 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.737 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.737 | 99.00th=[36963], 99.50th=[42206], 99.90th=[60031], 99.95th=[60031], 00:36:11.737 | 99.99th=[70779] 00:36:11.737 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=85.33, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.737 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.737 cpu : usr=98.34%, sys=1.27%, ctx=13, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename1: (groupid=0, jobs=1): err= 0: pid=2582245: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10023msec) 00:36:11.737 slat (nsec): min=10470, max=86627, avg=36516.12, stdev=10944.78 00:36:11.737 clat (usec): min=22845, max=54213, avg=33000.82, stdev=1358.71 00:36:11.737 lat (usec): min=22884, max=54264, avg=33037.33, stdev=1360.71 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.737 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.737 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35390], 00:36:11.737 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:36:11.737 | 99.99th=[54264] 00:36:11.737 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.737 lat (msec) : 50=99.96%, 100=0.04% 00:36:11.737 cpu : usr=96.27%, sys=2.23%, ctx=112, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.737 filename2: (groupid=0, jobs=1): err= 0: pid=2582246: Sun Jul 21 03:45:55 2024 00:36:11.737 read: IOPS=479, BW=1918KiB/s 
(1964kB/s)(18.8MiB/10023msec) 00:36:11.737 slat (nsec): min=10616, max=92007, avg=35380.63, stdev=12054.52 00:36:11.737 clat (usec): min=22923, max=46010, avg=32985.45, stdev=1377.06 00:36:11.737 lat (usec): min=22957, max=46041, avg=33020.83, stdev=1376.02 00:36:11.737 clat percentiles (usec): 00:36:11.737 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.737 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.737 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:36:11.737 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:36:11.737 | 99.99th=[45876] 00:36:11.737 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.737 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.737 lat (msec) : 50=100.00% 00:36:11.737 cpu : usr=98.26%, sys=1.33%, ctx=13, majf=0, minf=9 00:36:11.737 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.737 issued rwts: total=4805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582247: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10014msec) 00:36:11.738 slat (nsec): min=6859, max=60098, avg=30423.92, stdev=9203.19 00:36:11.738 clat (usec): min=23147, max=66114, avg=33115.05, stdev=2417.31 00:36:11.738 lat (usec): min=23157, max=66133, avg=33145.48, stdev=2416.24 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:11.738 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:11.738 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:36:11.738 | 99.00th=[41681], 99.50th=[42206], 99.90th=[66323], 99.95th=[66323], 00:36:11.738 | 99.99th=[66323] 00:36:11.738 bw ( KiB/s): min= 1539, max= 2048, per=4.15%, avg=1913.42, stdev=99.19, samples=19 00:36:11.738 iops : min= 384, max= 512, avg=478.32, stdev=24.96, samples=19 00:36:11.738 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.738 cpu : usr=97.99%, sys=1.60%, ctx=15, majf=0, minf=9 00:36:11.738 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582248: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10022msec) 00:36:11.738 slat (nsec): min=9790, max=99645, avg=41257.20, stdev=16251.35 00:36:11.738 clat (usec): min=23067, max=43944, avg=32921.77, stdev=1362.44 00:36:11.738 lat (usec): min=23079, max=43981, avg=32963.03, stdev=1364.08 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.738 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.738 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:36:11.738 | 99.00th=[36439], 99.50th=[42730], 99.90th=[43779], 
99.95th=[43779], 00:36:11.738 | 99.99th=[43779] 00:36:11.738 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.738 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.738 lat (msec) : 50=100.00% 00:36:11.738 cpu : usr=97.27%, sys=1.90%, ctx=63, majf=0, minf=9 00:36:11.738 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582249: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.738 slat (usec): min=27, max=125, avg=75.31, stdev=11.64 00:36:11.738 clat (usec): min=21479, max=58198, avg=32665.88, stdev=1972.57 00:36:11.738 lat (usec): min=21579, max=58240, avg=32741.19, stdev=1969.18 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:36:11.738 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:36:11.738 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:36:11.738 | 99.00th=[36439], 99.50th=[43254], 99.90th=[57934], 99.95th=[57934], 00:36:11.738 | 99.99th=[58459] 00:36:11.738 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=85.33, samples=19 00:36:11.738 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:36:11.738 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.738 cpu : usr=98.22%, sys=1.33%, ctx=13, majf=0, minf=9 00:36:11.738 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582250: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10022msec) 00:36:11.738 slat (nsec): min=6706, max=71620, avg=14098.62, stdev=5729.95 00:36:11.738 clat (usec): min=25788, max=44105, avg=33162.72, stdev=1296.27 00:36:11.738 lat (usec): min=25845, max=44121, avg=33176.81, stdev=1296.51 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:11.738 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:11.738 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:36:11.738 | 99.00th=[36963], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:36:11.738 | 99.99th=[44303] 00:36:11.738 bw ( KiB/s): min= 1667, max= 2048, per=4.17%, avg=1920.15, stdev=71.37, samples=20 00:36:11.738 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:36:11.738 lat (msec) : 50=100.00% 00:36:11.738 cpu : usr=98.18%, sys=1.42%, ctx=15, majf=0, minf=9 00:36:11.738 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: 
total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582251: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:11.738 slat (usec): min=7, max=108, avg=43.28, stdev=19.44 00:36:11.738 clat (usec): min=22807, max=58395, avg=32974.68, stdev=2047.46 00:36:11.738 lat (usec): min=22863, max=58416, avg=33017.96, stdev=2046.93 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.738 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.738 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:36:11.738 | 99.00th=[36439], 99.50th=[43779], 99.90th=[58459], 99.95th=[58459], 00:36:11.738 | 99.99th=[58459] 00:36:11.738 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=82.97, samples=19 00:36:11.738 iops : min= 416, max= 512, avg=480.00, stdev=20.74, samples=19 00:36:11.738 lat (msec) : 50=99.67%, 100=0.33% 00:36:11.738 cpu : usr=98.04%, sys=1.38%, ctx=49, majf=0, minf=9 00:36:11.738 IO depths : 1=3.9%, 2=10.1%, 4=24.9%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582252: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10023msec) 00:36:11.738 slat (usec): min=13, max=102, avg=36.87, stdev=11.00 00:36:11.738 clat (usec): min=22881, max=54280, avg=32993.48, stdev=1397.47 00:36:11.738 lat (usec): min=22920, max=54335, avg=33030.36, stdev=1396.07 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:11.738 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:11.738 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35914], 00:36:11.738 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:36:11.738 | 99.99th=[54264] 00:36:11.738 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1920.00, stdev=73.90, samples=19 00:36:11.738 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:36:11.738 lat (msec) : 50=99.96%, 100=0.04% 00:36:11.738 cpu : usr=96.69%, sys=2.05%, ctx=119, majf=0, minf=9 00:36:11.738 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 filename2: (groupid=0, jobs=1): err= 0: pid=2582253: Sun Jul 21 03:45:55 2024 00:36:11.738 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:36:11.738 slat (nsec): min=3695, max=64095, avg=18014.43, stdev=9749.23 00:36:11.738 clat (usec): min=1805, max=42184, avg=32669.39, stdev=3574.10 00:36:11.738 lat (usec): min=1814, max=42206, avg=32687.40, stdev=3575.31 00:36:11.738 clat percentiles (usec): 00:36:11.738 | 1.00th=[ 8029], 5.00th=[32375], 10.00th=[32375], 
20.00th=[32637], 00:36:11.738 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:11.738 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:11.738 | 99.00th=[36439], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:36:11.738 | 99.99th=[42206] 00:36:11.738 bw ( KiB/s): min= 1792, max= 2304, per=4.22%, avg=1945.60, stdev=106.69, samples=20 00:36:11.738 iops : min= 448, max= 576, avg=486.40, stdev=26.67, samples=20 00:36:11.738 lat (msec) : 2=0.51%, 4=0.43%, 10=0.08%, 20=0.61%, 50=98.36% 00:36:11.738 cpu : usr=95.18%, sys=2.86%, ctx=1160, majf=0, minf=0 00:36:11.738 IO depths : 1=6.1%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.738 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.738 00:36:11.738 Run status group 0 (all jobs): 00:36:11.738 READ: bw=45.0MiB/s (47.2MB/s), 1917KiB/s-1950KiB/s (1963kB/s-1997kB/s), io=451MiB (473MB), run=10002-10024msec 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.738 
03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:11.738 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 bdev_null0 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 [2024-07-21 03:45:55.862652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 bdev_null1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.739 { 00:36:11.739 "params": { 00:36:11.739 "name": 
"Nvme$subsystem", 00:36:11.739 "trtype": "$TEST_TRANSPORT", 00:36:11.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.739 "adrfam": "ipv4", 00:36:11.739 "trsvcid": "$NVMF_PORT", 00:36:11.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.739 "hdgst": ${hdgst:-false}, 00:36:11.739 "ddgst": ${ddgst:-false} 00:36:11.739 }, 00:36:11.739 "method": "bdev_nvme_attach_controller" 00:36:11.739 } 00:36:11.739 EOF 00:36:11.739 )") 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.739 { 00:36:11.739 "params": { 00:36:11.739 "name": "Nvme$subsystem", 00:36:11.739 "trtype": "$TEST_TRANSPORT", 00:36:11.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.739 "adrfam": "ipv4", 00:36:11.739 "trsvcid": "$NVMF_PORT", 00:36:11.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.739 "hdgst": ${hdgst:-false}, 00:36:11.739 "ddgst": ${ddgst:-false} 00:36:11.739 }, 00:36:11.739 "method": "bdev_nvme_attach_controller" 00:36:11.739 } 00:36:11.739 EOF 00:36:11.739 )") 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 
00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:11.739 "params": { 00:36:11.739 "name": "Nvme0", 00:36:11.739 "trtype": "tcp", 00:36:11.739 "traddr": "10.0.0.2", 00:36:11.739 "adrfam": "ipv4", 00:36:11.739 "trsvcid": "4420", 00:36:11.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.739 "hdgst": false, 00:36:11.739 "ddgst": false 00:36:11.739 }, 00:36:11.739 "method": "bdev_nvme_attach_controller" 00:36:11.739 },{ 00:36:11.739 "params": { 00:36:11.739 "name": "Nvme1", 00:36:11.739 "trtype": "tcp", 00:36:11.739 "traddr": "10.0.0.2", 00:36:11.739 "adrfam": "ipv4", 00:36:11.739 "trsvcid": "4420", 00:36:11.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.739 "hdgst": false, 00:36:11.739 "ddgst": false 00:36:11.739 }, 00:36:11.739 "method": "bdev_nvme_attach_controller" 00:36:11.739 }' 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:11.739 03:45:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.739 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.739 ... 00:36:11.739 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.739 ... 
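The generated job file on /dev/fd/61 is not echoed into the log; what follows is a hypothetical reconstruction from the dif.sh parameters above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the two per-file banner lines just printed. The bdev names Nvme0n1/Nvme1n1 are inferred from the attached controllers Nvme0 and Nvme1, and thread=1 is required by the spdk_bdev ioengine.

# hypothetical equivalent of the job file gen_fio_conf feeds to fio
cat > dif_rand_params.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

Two job sections with numjobs=2 is what yields the "Starting 4 threads" line below.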
00:36:11.739 fio-3.35 00:36:11.739 Starting 4 threads 00:36:11.739 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.998 00:36:16.998 filename0: (groupid=0, jobs=1): err= 0: pid=2583628: Sun Jul 21 03:46:01 2024 00:36:16.998 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5002msec) 00:36:16.998 slat (nsec): min=4160, max=62003, avg=19264.14, stdev=8096.72 00:36:16.998 clat (usec): min=1041, max=7580, avg=4159.81, stdev=592.44 00:36:16.998 lat (usec): min=1060, max=7593, avg=4179.08, stdev=592.19 00:36:16.998 clat percentiles (usec): 00:36:16.998 | 1.00th=[ 2409], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3949], 00:36:16.998 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:36:16.998 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 5211], 00:36:16.998 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:36:16.998 | 99.99th=[ 7570] 00:36:16.998 bw ( KiB/s): min=14512, max=15872, per=24.74%, avg=15137.30, stdev=374.31, samples=10 00:36:16.998 iops : min= 1814, max= 1984, avg=1892.10, stdev=46.84, samples=10 00:36:16.998 lat (msec) : 2=0.57%, 4=27.95%, 10=71.48% 00:36:16.998 cpu : usr=95.00%, sys=4.50%, ctx=11, majf=0, minf=0 00:36:16.998 IO depths : 1=0.1%, 2=16.1%, 4=56.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 issued rwts: total=9466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.998 filename0: (groupid=0, jobs=1): err= 0: pid=2583629: Sun Jul 21 03:46:01 2024 00:36:16.998 read: IOPS=1947, BW=15.2MiB/s (16.0MB/s)(76.1MiB/5002msec) 00:36:16.998 slat (nsec): min=4266, max=60712, avg=14811.37, stdev=8133.76 00:36:16.998 clat (usec): min=745, max=8091, avg=4059.39, stdev=533.08 00:36:16.998 lat (usec): min=763, max=8106, avg=4074.20, stdev=533.22 00:36:16.998 clat percentiles (usec): 00:36:16.998 | 1.00th=[ 2311], 5.00th=[ 3228], 10.00th=[ 3523], 20.00th=[ 3818], 00:36:16.998 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:36:16.998 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:36:16.998 | 99.00th=[ 5866], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7898], 00:36:16.998 | 99.99th=[ 8094] 00:36:16.998 bw ( KiB/s): min=15152, max=16688, per=25.51%, avg=15610.67, stdev=517.97, samples=9 00:36:16.998 iops : min= 1894, max= 2086, avg=1951.33, stdev=64.75, samples=9 00:36:16.998 lat (usec) : 750=0.02%, 1000=0.06% 00:36:16.998 lat (msec) : 2=0.61%, 4=34.09%, 10=65.23% 00:36:16.998 cpu : usr=95.80%, sys=3.62%, ctx=10, majf=0, minf=0 00:36:16.998 IO depths : 1=0.3%, 2=11.1%, 4=60.0%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 issued rwts: total=9740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.998 filename1: (groupid=0, jobs=1): err= 0: pid=2583630: Sun Jul 21 03:46:01 2024 00:36:16.998 read: IOPS=1907, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5001msec) 00:36:16.998 slat (nsec): min=4314, max=74006, avg=19224.03, stdev=10327.91 00:36:16.998 clat (usec): min=693, max=7984, avg=4123.59, stdev=581.28 00:36:16.998 lat (usec): min=707, max=8054, avg=4142.82, stdev=581.33 00:36:16.998 clat percentiles (usec): 
00:36:16.998 | 1.00th=[ 2343], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3884], 00:36:16.998 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:36:16.998 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 5080], 00:36:16.998 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:36:16.998 | 99.99th=[ 7963] 00:36:16.998 bw ( KiB/s): min=14944, max=15808, per=24.97%, avg=15276.44, stdev=233.89, samples=9 00:36:16.998 iops : min= 1868, max= 1976, avg=1909.56, stdev=29.24, samples=9 00:36:16.998 lat (usec) : 750=0.02%, 1000=0.05% 00:36:16.998 lat (msec) : 2=0.63%, 4=32.34%, 10=66.96% 00:36:16.998 cpu : usr=95.56%, sys=3.86%, ctx=13, majf=0, minf=0 00:36:16.998 IO depths : 1=0.1%, 2=16.2%, 4=56.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 issued rwts: total=9540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.998 filename1: (groupid=0, jobs=1): err= 0: pid=2583631: Sun Jul 21 03:46:01 2024 00:36:16.998 read: IOPS=1901, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5002msec) 00:36:16.998 slat (nsec): min=4335, max=68230, avg=19255.83, stdev=10239.63 00:36:16.998 clat (usec): min=721, max=7415, avg=4134.37, stdev=582.13 00:36:16.998 lat (usec): min=734, max=7430, avg=4153.62, stdev=581.95 00:36:16.998 clat percentiles (usec): 00:36:16.998 | 1.00th=[ 2212], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3916], 00:36:16.998 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:36:16.998 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5145], 00:36:16.998 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7177], 99.95th=[ 7373], 00:36:16.998 | 99.99th=[ 7439] 00:36:16.998 bw ( KiB/s): min=14864, max=15616, per=24.81%, avg=15178.67, stdev=249.42, samples=9 00:36:16.998 iops : min= 1858, max= 1952, avg=1897.33, stdev=31.18, samples=9 00:36:16.998 lat (usec) : 750=0.02%, 1000=0.05% 00:36:16.998 lat (msec) : 2=0.66%, 4=29.74%, 10=69.52% 00:36:16.998 cpu : usr=96.08%, sys=3.44%, ctx=8, majf=0, minf=0 00:36:16.998 IO depths : 1=0.2%, 2=17.3%, 4=55.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.998 issued rwts: total=9512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.998 00:36:16.998 Run status group 0 (all jobs): 00:36:16.998 READ: bw=59.8MiB/s (62.7MB/s), 14.8MiB/s-15.2MiB/s (15.5MB/s-16.0MB/s), io=299MiB (313MB), run=5001-5002msec 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.998 03:46:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.998 00:36:16.998 real 0m24.340s 00:36:16.998 user 4m31.920s 00:36:16.998 sys 0m6.992s 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:16.998 03:46:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.998 ************************************ 00:36:16.998 END TEST fio_dif_rand_params 00:36:16.998 ************************************ 00:36:16.998 03:46:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:16.998 03:46:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:16.999 03:46:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:16.999 03:46:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.999 ************************************ 00:36:16.999 START TEST fio_dif_digest 00:36:16.999 ************************************ 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:16.999 03:46:02 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.999 bdev_null0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.999 [2024-07-21 03:46:02.280146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:16.999 { 00:36:16.999 "params": { 00:36:16.999 "name": "Nvme$subsystem", 00:36:16.999 "trtype": "$TEST_TRANSPORT", 00:36:16.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.999 "adrfam": "ipv4", 00:36:16.999 "trsvcid": "$NVMF_PORT", 00:36:16.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.999 "hdgst": ${hdgst:-false}, 00:36:16.999 "ddgst": ${ddgst:-false} 00:36:16.999 }, 00:36:16.999 "method": 
"bdev_nvme_attach_controller" 00:36:16.999 } 00:36:16.999 EOF 00:36:16.999 )") 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:16.999 "params": { 00:36:16.999 "name": "Nvme0", 00:36:16.999 "trtype": "tcp", 00:36:16.999 "traddr": "10.0.0.2", 00:36:16.999 "adrfam": "ipv4", 00:36:16.999 "trsvcid": "4420", 00:36:16.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.999 "hdgst": true, 00:36:16.999 "ddgst": true 00:36:16.999 }, 00:36:16.999 "method": "bdev_nvme_attach_controller" 00:36:16.999 }' 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:16.999 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:17.257 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:17.257 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:17.257 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.257 03:46:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.257 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:17.257 ... 
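The printf output just above is the complete initiator-side configuration fio receives on /dev/fd/62; "hdgst": true and "ddgst": true enable NVMe/TCP header and data digests on the connection, which is the behavior under test here. The matching job file, again a hypothetical reconstruction from the dif.sh parameters above (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10) and the banner line just printed:

# hypothetical job file for the digest run
cat > dif_digest.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k,128k,128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF

A single job section with numjobs=3 accounts for the "Starting 3 threads" line below.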
00:36:17.257 fio-3.35 00:36:17.257 Starting 3 threads 00:36:17.257 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.493 00:36:29.493 filename0: (groupid=0, jobs=1): err= 0: pid=2584381: Sun Jul 21 03:46:13 2024 00:36:29.493 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(250MiB/10046msec) 00:36:29.493 slat (nsec): min=4761, max=43033, avg=15536.07, stdev=2845.30 00:36:29.493 clat (usec): min=11886, max=53809, avg=15043.51, stdev=1519.52 00:36:29.493 lat (usec): min=11901, max=53828, avg=15059.05, stdev=1519.67 00:36:29.493 clat percentiles (usec): 00:36:29.493 | 1.00th=[13042], 5.00th=[13566], 10.00th=[13960], 20.00th=[14222], 00:36:29.493 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:36:29.493 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16712], 00:36:29.493 | 99.00th=[17695], 99.50th=[17957], 99.90th=[51643], 99.95th=[53740], 00:36:29.493 | 99.99th=[53740] 00:36:29.493 bw ( KiB/s): min=24832, max=26112, per=31.98%, avg=25538.50, stdev=376.53, samples=20 00:36:29.493 iops : min= 194, max= 204, avg=199.50, stdev= 2.96, samples=20 00:36:29.493 lat (msec) : 20=99.70%, 50=0.20%, 100=0.10% 00:36:29.493 cpu : usr=93.94%, sys=5.58%, ctx=22, majf=0, minf=110 00:36:29.493 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 issued rwts: total=1998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.493 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.493 filename0: (groupid=0, jobs=1): err= 0: pid=2584382: Sun Jul 21 03:46:13 2024 00:36:29.493 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(273MiB/10047msec) 00:36:29.493 slat (nsec): min=4852, max=40705, avg=15702.32, stdev=3000.38 00:36:29.493 clat (usec): min=9834, max=55186, avg=13774.28, stdev=1486.19 00:36:29.493 lat (usec): min=9848, max=55206, avg=13789.98, stdev=1486.24 00:36:29.493 clat percentiles (usec): 00:36:29.493 | 1.00th=[11600], 5.00th=[12256], 10.00th=[12649], 20.00th=[13042], 00:36:29.493 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:36:29.493 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:36:29.493 | 99.00th=[15926], 99.50th=[16319], 99.90th=[22414], 99.95th=[48497], 00:36:29.493 | 99.99th=[55313] 00:36:29.493 bw ( KiB/s): min=26880, max=28672, per=34.94%, avg=27904.00, stdev=423.51, samples=20 00:36:29.493 iops : min= 210, max= 224, avg=218.00, stdev= 3.31, samples=20 00:36:29.493 lat (msec) : 10=0.05%, 20=99.73%, 50=0.18%, 100=0.05% 00:36:29.493 cpu : usr=93.65%, sys=5.74%, ctx=18, majf=0, minf=135 00:36:29.493 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.493 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.493 filename0: (groupid=0, jobs=1): err= 0: pid=2584383: Sun Jul 21 03:46:13 2024 00:36:29.493 read: IOPS=207, BW=26.0MiB/s (27.3MB/s)(261MiB/10046msec) 00:36:29.493 slat (nsec): min=5269, max=43345, avg=16539.59, stdev=3341.33 00:36:29.493 clat (usec): min=10942, max=49972, avg=14387.23, stdev=1438.22 00:36:29.493 lat (usec): min=10957, max=49994, avg=14403.77, stdev=1438.21 00:36:29.493 clat percentiles (usec): 00:36:29.493 | 
1.00th=[12256], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:36:29.493 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:36:29.493 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15795], 00:36:29.493 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21627], 99.95th=[47973], 00:36:29.493 | 99.99th=[50070] 00:36:29.493 bw ( KiB/s): min=26368, max=27392, per=33.43%, avg=26700.80, stdev=276.72, samples=20 00:36:29.493 iops : min= 206, max= 214, avg=208.60, stdev= 2.16, samples=20 00:36:29.493 lat (msec) : 20=99.76%, 50=0.24% 00:36:29.493 cpu : usr=92.24%, sys=6.43%, ctx=198, majf=0, minf=128 00:36:29.493 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.493 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.494 00:36:29.494 Run status group 0 (all jobs): 00:36:29.494 READ: bw=78.0MiB/s (81.8MB/s), 24.9MiB/s-27.1MiB/s (26.1MB/s-28.5MB/s), io=784MiB (822MB), run=10046-10047msec 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.494 00:36:29.494 real 0m11.103s 00:36:29.494 user 0m29.149s 00:36:29.494 sys 0m2.056s 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:29.494 03:46:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.494 ************************************ 00:36:29.494 END TEST fio_dif_digest 00:36:29.494 ************************************ 00:36:29.494 03:46:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:29.494 03:46:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:29.494 rmmod nvme_tcp 00:36:29.494 rmmod nvme_fabrics 00:36:29.494 rmmod 
nvme_keyring 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2578338 ']' 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2578338 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 2578338 ']' 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 2578338 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2578338 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2578338' 00:36:29.494 killing process with pid 2578338 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@965 -- # kill 2578338 00:36:29.494 03:46:13 nvmf_dif -- common/autotest_common.sh@970 -- # wait 2578338 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:29.494 03:46:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.494 Waiting for block devices as requested 00:36:29.494 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:29.751 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:29.751 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:29.751 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.008 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.008 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.008 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.008 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.266 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.266 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.266 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.266 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.523 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.523 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.523 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.523 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.523 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.780 03:46:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:30.780 03:46:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:30.780 03:46:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:30.780 03:46:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:30.780 03:46:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.780 03:46:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.780 03:46:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.690 03:46:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:32.690 00:36:32.690 real 1m6.194s 00:36:32.690 user 6m28.370s 00:36:32.690 sys 0m17.856s 00:36:32.690 03:46:17 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:32.690 03:46:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
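Condensed, the teardown just traced does four things: flush outstanding I/O, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the target application, and hand the PCI devices back to their kernel drivers. A sketch of the effective commands; the nvmf_tgt_pid variable name is assumed:

# effective cleanup sequence, condensed from the trace above
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics   # together these produce the rmmod lines above
kill "$nvmf_tgt_pid"          # 2578338 in this run; variable name assumed
scripts/setup.sh reset        # rebind devices from vfio-pci back to nvme/ioatdma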
00:36:32.690 ************************************ 00:36:32.690 END TEST nvmf_dif 00:36:32.690 ************************************ 00:36:32.690 03:46:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:32.690 03:46:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:32.690 03:46:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:32.690 03:46:18 -- common/autotest_common.sh@10 -- # set +x 00:36:32.948 ************************************ 00:36:32.948 START TEST nvmf_abort_qd_sizes 00:36:32.948 ************************************ 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:32.948 * Looking for test storage... 00:36:32.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.948 03:46:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:32.948 03:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.846 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:34.846 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:34.846 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:34.846 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:34.846 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:34.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:34.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:34.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:34.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
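The pci_devs walk above comes straight out of sysfs: gather_supported_nvmf_pci_devs caches vendor/device IDs per PCI function, keeps the Intel E810 matches (0x8086:0x159b, bound to the ice driver), and reads each function's net/ directory to learn the kernel interface names (cvl_0_0, cvl_0_1). A minimal standalone sketch of the same detection pattern, not the common.sh implementation itself (the hard-coded ID filter is the only assumption):

    #!/usr/bin/env bash
    # Walk PCI functions and report kernel netdevs behind Intel E810 NICs (0x8086:0x159b).
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x8086
        device=$(<"$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
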
00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:34.847 03:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:34.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:36:34.847 00:36:34.847 --- 10.0.0.2 ping statistics --- 00:36:34.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.847 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:34.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:36:34.847 00:36:34.847 --- 10.0.0.1 ping statistics --- 00:36:34.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.847 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
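Everything nvmf_tcp_init does above reduces to a short iproute2 sequence: move the target-side port of the back-to-back E810 pair into its own network namespace, address the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator), open TCP/4420, and prove reachability with a ping in each direction. Condensed from the trace (interface names are the ones detected on this host; any directly cabled pair behaves the same):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
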
00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:34.847 03:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.781 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:35.781 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:35.781 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:35.781 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:36.039 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:36.039 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:36.039 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:36.039 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:36.039 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:36.972 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2589164 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2589164 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 2589164 ']' 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:36.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:36.972 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.229 [2024-07-21 03:46:22.285902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:37.229 [2024-07-21 03:46:22.286002] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.229 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.229 [2024-07-21 03:46:22.350490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:37.229 [2024-07-21 03:46:22.436626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.229 [2024-07-21 03:46:22.436675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.229 [2024-07-21 03:46:22.436699] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.229 [2024-07-21 03:46:22.436711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.229 [2024-07-21 03:46:22.436722] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.229 [2024-07-21 03:46:22.436788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.229 [2024-07-21 03:46:22.436848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:37.229 [2024-07-21 03:46:22.436914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.229 [2024-07-21 03:46:22.436917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:37.485 03:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:37.486 03:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.486 ************************************ 00:36:37.486 START TEST spdk_target_abort 00:36:37.486 ************************************ 00:36:37.486 03:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:37.486 03:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:37.486 03:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:37.486 03:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.486 03:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.757 spdk_targetn1 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.757 [2024-07-21 03:46:25.470568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.757 [2024-07-21 03:46:25.502864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.757 03:46:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.757 EAL: No free 2048 kB hugepages reported on node 1 
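From here the spdk_target_abort case is pure RPC driving: the local NVMe drive at 0000:88:00.0 is attached as a bdev, a TCP transport is created, the bdev is exported as namespace 1 of nqn.2016-06.io.spdk:testnqn, and a listener is opened on the namespaced address. The same sequence issued by hand with SPDK's scripts/rpc.py (the test's rpc_cmd wrapper drives the same RPC server on /var/tmp/spdk.sock) would look roughly like:

    # Attach the physical controller; this produces the bdev "spdk_targetn1".
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # Then hammer it with aborts at queue depth 4 (the qds loop repeats this for 24 and 64):
    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
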
00:36:44.034 Initializing NVMe Controllers 00:36:44.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.034 Initialization complete. Launching workers. 00:36:44.034 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13077, failed: 0 00:36:44.035 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 11812 00:36:44.035 success 731, unsuccess 534, failed 0 00:36:44.035 03:46:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.035 03:46:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.035 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.314 Initializing NVMe Controllers 00:36:47.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.314 Initialization complete. Launching workers. 00:36:47.314 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8682, failed: 0 00:36:47.314 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7468 00:36:47.314 success 333, unsuccess 881, failed 0 00:36:47.314 03:46:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.314 03:46:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.314 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.836 Initializing NVMe Controllers 00:36:49.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:49.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:49.836 Initialization complete. Launching workers. 
00:36:49.836 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31511, failed: 0 00:36:49.836 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2735, failed to submit 28776 00:36:49.836 success 535, unsuccess 2200, failed 0 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.093 03:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2589164 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 2589164 ']' 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 2589164 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2589164 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2589164' 00:36:51.495 killing process with pid 2589164 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 2589164 00:36:51.495 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 2589164 00:36:51.757 00:36:51.757 real 0m14.177s 00:36:51.757 user 0m53.954s 00:36:51.757 sys 0m2.413s 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:51.757 ************************************ 00:36:51.757 END TEST spdk_target_abort 00:36:51.757 ************************************ 00:36:51.757 03:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:51.757 03:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:51.757 03:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:51.757 03:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.757 ************************************ 00:36:51.757 START TEST kernel_target_abort 00:36:51.757 
************************************ 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:51.757 03:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.693 Waiting for block devices as requested 00:36:52.693 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:52.952 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:52.952 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:52.952 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:53.210 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:53.210 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:53.210 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:53.210 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:53.210 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:53.468 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:53.468 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:53.468 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:53.726 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:53.726 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:53.726 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:53.726 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:53.982 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:53.982 No valid GPT data, bailing 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:53.982 03:46:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:53.982 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:53.983 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:54.240 00:36:54.240 Discovery Log Number of Records 2, Generation counter 2 00:36:54.240 =====Discovery Log Entry 0====== 00:36:54.240 trtype: tcp 00:36:54.240 adrfam: ipv4 00:36:54.240 subtype: current discovery subsystem 00:36:54.240 treq: not specified, sq flow control disable supported 00:36:54.240 portid: 1 00:36:54.240 trsvcid: 4420 00:36:54.240 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:54.240 traddr: 10.0.0.1 00:36:54.240 eflags: none 00:36:54.240 sectype: none 00:36:54.240 =====Discovery Log Entry 1====== 00:36:54.240 trtype: tcp 00:36:54.240 adrfam: ipv4 00:36:54.240 subtype: nvme subsystem 00:36:54.240 treq: not specified, sq flow control disable supported 00:36:54.240 portid: 1 00:36:54.240 trsvcid: 4420 00:36:54.240 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:54.240 traddr: 10.0.0.1 00:36:54.240 eflags: none 00:36:54.240 sectype: none
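configure_kernel_target, traced above, builds the in-kernel nvmet target entirely through configfs; xtrace records the echo values but not their redirect targets. Reconstructed against the stock nvmet attribute files (the target file names below are the standard ones and are inferred, not captured from this run):

    modprobe nvmet                  # nvmet-tcp must also be loadable for the tcp port
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
    nvme discover -a 10.0.0.1 -t tcp -s 4420   # should list the discovery and test subsystems
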
00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.240 03:46:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.240 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.514 Initializing NVMe Controllers 00:36:57.514 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.514 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.514 Initialization complete. Launching workers. 00:36:57.514 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45065, failed: 0 00:36:57.514 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45065, failed to submit 0 00:36:57.514 success 0, unsuccess 45065, failed 0 00:36:57.514 03:46:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.514 03:46:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.514 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.791 Initializing NVMe Controllers 00:37:00.791 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.791 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.791 Initialization complete. Launching workers. 
00:37:00.791 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78796, failed: 0 00:37:00.791 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19866, failed to submit 58930 00:37:00.791 success 0, unsuccess 19866, failed 0 00:37:00.791 03:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:00.791 03:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.791 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.067 Initializing NVMe Controllers 00:37:04.067 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.067 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.067 Initialization complete. Launching workers. 00:37:04.067 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76708, failed: 0 00:37:04.067 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19162, failed to submit 57546 00:37:04.067 success 0, unsuccess 19162, failed 0 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:04.067 03:46:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:04.632 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:04.632 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:04.891 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:04.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:05.826 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:05.826 00:37:05.826 real 0m14.178s 00:37:05.826 user 0m6.234s 00:37:05.826 sys 0m3.163s 00:37:05.826 03:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:05.826 03:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.826 ************************************ 00:37:05.826 END TEST kernel_target_abort 00:37:05.826 ************************************ 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:05.826 rmmod nvme_tcp 00:37:05.826 rmmod nvme_fabrics 00:37:05.826 rmmod nvme_keyring 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2589164 ']' 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2589164 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 2589164 ']' 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 2589164 00:37:05.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2589164) - No such process 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 2589164 is not found' 00:37:05.826 Process with pid 2589164 is not found 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:05.826 03:46:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.202 Waiting for block devices as requested 00:37:07.202 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:07.202 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:07.202 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:07.459 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:07.459 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:07.459 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:07.459 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:07.717 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:07.717 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:07.717 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:07.717 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:07.717 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:07.975 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:07.975 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:07.975 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:07.975 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:37:08.232 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.232 03:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.757 03:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:10.757 00:37:10.757 real 0m37.464s 00:37:10.757 user 1m2.221s 00:37:10.757 sys 0m8.730s 00:37:10.757 03:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:10.757 03:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:10.757 ************************************ 00:37:10.757 END TEST nvmf_abort_qd_sizes 00:37:10.757 ************************************ 00:37:10.757 03:46:55 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.757 03:46:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:10.757 03:46:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:10.757 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:37:10.757 ************************************ 00:37:10.757 START TEST keyring_file 00:37:10.757 ************************************ 00:37:10.757 03:46:55 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.757 * Looking for test storage... 
00:37:10.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:10.757 03:46:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:10.757 03:46:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.757 03:46:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.758 03:46:55 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.758 03:46:55 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.758 03:46:55 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.758 03:46:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.758 03:46:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.758 03:46:55 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.758 03:46:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:10.758 03:46:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2rOaDxBT2b 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.758 03:46:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2rOaDxBT2b 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2rOaDxBT2b 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2rOaDxBT2b 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kCrWPp7Pcj 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.758 03:46:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kCrWPp7Pcj 00:37:10.758 03:46:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kCrWPp7Pcj 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kCrWPp7Pcj 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=2594912 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:10.758 03:46:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2594912 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2594912 ']' 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:10.758 03:46:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.758 [2024-07-21 03:46:55.741255] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:10.758 [2024-07-21 03:46:55.741331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594912 ] 00:37:10.758 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.758 [2024-07-21 03:46:55.803935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.758 [2024-07-21 03:46:55.893264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:11.016 03:46:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.016 [2024-07-21 03:46:56.145611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.016 null0 00:37:11.016 [2024-07-21 03:46:56.177688] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:11.016 [2024-07-21 03:46:56.178177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:11.016 [2024-07-21 03:46:56.185715] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.016 03:46:56 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.016 [2024-07-21 03:46:56.197724] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:11.016 request: 00:37:11.016 { 00:37:11.016 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.016 "secure_channel": false, 00:37:11.016 "listen_address": { 00:37:11.016 "trtype": "tcp", 00:37:11.016 "traddr": "127.0.0.1", 00:37:11.016 "trsvcid": "4420" 00:37:11.016 }, 00:37:11.016 "method": "nvmf_subsystem_add_listener", 00:37:11.016 "req_id": 1 00:37:11.016 } 00:37:11.016 Got JSON-RPC error response 00:37:11.016 response: 00:37:11.016 { 00:37:11.016 "code": -32602, 00:37:11.016 "message": "Invalid parameters" 00:37:11.016 } 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:11.016 03:46:56 
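[Annotation] The `NOT rpc_cmd nvmf_subsystem_add_listener` step above is a deliberate negative test: the target already listens on 127.0.0.1:4420, so re-adding the listener must fail with JSON-RPC code -32602 (Invalid parameters), and the `NOT` wrapper turns that failure into a pass. A hedged sketch of driving the same RPC directly over the UNIX socket (the test itself uses scripts/rpc.py; this minimal client is an assumption about the wire format, which is plain JSON-RPC 2.0):

```python
import json
import socket

def spdk_rpc(sock_path, method, params=None):
    """Minimal JSON-RPC 2.0 client for SPDK's UNIX-domain RPC socket (sketch)."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                pass  # reply not complete yet, keep reading

# Re-adding the listener the target already owns fails, as seen in the log:
resp = spdk_rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode0",
    "secure_channel": False,
    "listen_address": {"trtype": "tcp", "traddr": "127.0.0.1", "trsvcid": "4420"},
})
assert resp["error"]["code"] == -32602  # Invalid parameters
```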
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.016 03:46:56 keyring_file -- keyring/file.sh@46 -- # bperfpid=2594922 00:37:11.016 03:46:56 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:11.016 03:46:56 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2594922 /var/tmp/bperf.sock 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2594922 ']' 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:11.016 03:46:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.016 [2024-07-21 03:46:56.244791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:11.016 [2024-07-21 03:46:56.244870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594922 ] 00:37:11.016 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.016 [2024-07-21 03:46:56.308452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.274 [2024-07-21 03:46:56.399238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.274 03:46:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:11.274 03:46:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:11.274 03:46:56 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:11.274 03:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:11.530 03:46:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kCrWPp7Pcj 00:37:11.530 03:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kCrWPp7Pcj 00:37:11.787 03:46:57 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:11.787 03:46:57 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:11.787 03:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.787 03:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.787 03:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.044 03:46:57 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.2rOaDxBT2b == \/\t\m\p\/\t\m\p\.\2\r\O\a\D\x\B\T\2\b ]] 00:37:12.044 03:46:57 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:12.044 03:46:57 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:12.044 03:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.044 03:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.044 03:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.300 03:46:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kCrWPp7Pcj == \/\t\m\p\/\t\m\p\.\k\C\r\W\P\p\7\P\c\j ]] 00:37:12.300 03:46:57 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:12.300 03:46:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.300 03:46:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.300 03:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.300 03:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.300 03:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.593 03:46:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:12.593 03:46:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:12.593 03:46:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:12.593 03:46:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.593 03:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.593 03:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.593 03:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.850 03:46:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:12.850 03:46:58 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.850 03:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.107 [2024-07-21 03:46:58.241182] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:13.107 nvme0n1 00:37:13.107 03:46:58 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:13.107 03:46:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.107 03:46:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.107 03:46:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.107 03:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.107 03:46:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.363 03:46:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:13.363 03:46:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:13.363 03:46:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.363 03:46:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.364 03:46:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.364 
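[Annotation] The `get_key`/`get_refcnt` helpers seen above call `keyring_get_keys` and filter the JSON array with `jq '.[] | select(.name == "keyX")'`. Note the refcnt semantics the checks rely on: a freshly added key has refcnt 1, and attaching a controller with `--psk key0` bumps key0 to 2 while key1 stays at 1. A hedged Python equivalent of the jq filter (field names taken from the responses in this log):

```python
def get_key(keys, name):
    """Equivalent of jq '.[] | select(.name == \"keyX\")' over keyring_get_keys."""
    return next((k for k in keys if k["name"] == name), None)

# Shape of the keyring_get_keys result as it appears in this log:
keys = [
    {"name": "key0", "path": "/tmp/tmp.2rOaDxBT2b", "refcnt": 2},  # held by nvme0
    {"name": "key1", "path": "/tmp/tmp.kCrWPp7Pcj", "refcnt": 1},  # loaded, unused
]
assert get_key(keys, "key0")["refcnt"] == 2  # matches the (( 2 == 2 )) check
```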
03:46:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.364 03:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.620 03:46:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:13.620 03:46:58 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.620 Running I/O for 1 seconds... 00:37:14.991 00:37:14.991 Latency(us) 00:37:14.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.991 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:14.991 nvme0n1 : 1.01 8032.31 31.38 0.00 0.00 15858.83 4805.97 23690.05 00:37:14.991 =================================================================================================================== 00:37:14.991 Total : 8032.31 31.38 0.00 0.00 15858.83 4805.97 23690.05 00:37:14.991 0 00:37:14.991 03:46:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:14.991 03:46:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:14.991 03:47:00 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:14.991 03:47:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:14.991 03:47:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.991 03:47:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.991 03:47:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.991 03:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.249 03:47:00 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:15.249 03:47:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:15.249 03:47:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:15.249 03:47:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.249 03:47:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.249 03:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.249 03:47:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:15.507 03:47:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:15.507 03:47:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.507 03:47:00 
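[Annotation] The one-second `perform_tests` run above reports 8032.31 IOPS at the configured 4 KiB I/O size (`-o 4k`), which is exactly the 31.38 MiB/s shown in the same row; a quick arithmetic check:

```python
iops = 8032.31           # from the run above
io_size = 4096           # bdevperf -o 4k
mib_per_s = iops * io_size / 2**20
print(f"{mib_per_s:.2f} MiB/s")  # ~31.38, matching the report (latencies are in us)
```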
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.507 03:47:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.507 03:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.765 [2024-07-21 03:47:00.937873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:15.765 [2024-07-21 03:47:00.938291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf730 (107): Transport endpoint is not connected 00:37:15.765 [2024-07-21 03:47:00.939282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf730 (9): Bad file descriptor 00:37:15.765 [2024-07-21 03:47:00.940280] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:15.765 [2024-07-21 03:47:00.940309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:15.765 [2024-07-21 03:47:00.940326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:15.765 request: 00:37:15.765 { 00:37:15.765 "name": "nvme0", 00:37:15.765 "trtype": "tcp", 00:37:15.765 "traddr": "127.0.0.1", 00:37:15.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.765 "adrfam": "ipv4", 00:37:15.765 "trsvcid": "4420", 00:37:15.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.765 "psk": "key1", 00:37:15.765 "method": "bdev_nvme_attach_controller", 00:37:15.765 "req_id": 1 00:37:15.765 } 00:37:15.765 Got JSON-RPC error response 00:37:15.765 response: 00:37:15.765 { 00:37:15.765 "code": -5, 00:37:15.765 "message": "Input/output error" 00:37:15.765 } 00:37:15.765 03:47:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.765 03:47:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.765 03:47:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.765 03:47:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.765 03:47:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:15.765 03:47:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.765 03:47:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.765 03:47:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.765 03:47:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.765 03:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.023 03:47:01 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:16.023 03:47:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:16.023 03:47:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.023 03:47:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.023 03:47:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.023 03:47:01 keyring_file -- keyring/common.sh@8 -- # 
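[Annotation] The failed attach above is another negative test: the listener was provisioned with key0's PSK, so connecting with `--psk key1` makes the TLS handshake collapse, the socket reports errno 107 (Transport endpoint is not connected), and the RPC surfaces JSON-RPC code -5 (Input/output error). The `NOT` wrapper inverts the exit status so the test passes only when the command fails. A hedged sketch of that assertion pattern (the rpc.py path and arguments mirror the log; treat them as illustrative):

```python
import subprocess

def expect_failure(cmd):
    """Equivalent of the test's NOT wrapper: succeed only if cmd fails."""
    rc = subprocess.run(cmd, capture_output=True).returncode
    assert rc != 0, f"{cmd[0]} unexpectedly succeeded"

expect_failure([
    "scripts/rpc.py", "-s", "/var/tmp/bperf.sock",
    "bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
    "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode0",
    "-q", "nqn.2016-06.io.spdk:host0", "--psk", "key1",
])
```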
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.023 03:47:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.280 03:47:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:16.280 03:47:01 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:16.280 03:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:16.538 03:47:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:16.538 03:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:16.795 03:47:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:16.795 03:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.795 03:47:01 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:17.052 03:47:02 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:17.052 03:47:02 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.2rOaDxBT2b 00:37:17.052 03:47:02 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:17.052 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.053 03:47:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.053 03:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.310 [2024-07-21 03:47:02.432374] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2rOaDxBT2b': 0100660 00:37:17.310 [2024-07-21 03:47:02.432420] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:17.310 request: 00:37:17.310 { 00:37:17.310 "name": "key0", 00:37:17.310 "path": "/tmp/tmp.2rOaDxBT2b", 00:37:17.310 "method": "keyring_file_add_key", 00:37:17.310 "req_id": 1 00:37:17.310 } 00:37:17.310 Got JSON-RPC error response 00:37:17.310 response: 00:37:17.310 { 00:37:17.310 "code": -1, 00:37:17.310 "message": "Operation not permitted" 00:37:17.310 } 00:37:17.310 03:47:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:17.310 03:47:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:17.310 03:47:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:17.310 03:47:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:17.310 03:47:02 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.2rOaDxBT2b 00:37:17.310 03:47:02 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
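[Annotation] The permission test above shows the keyring refusing key files readable by group or others: after `chmod 0660` the add fails with "Invalid permissions for key file ... 0100660" (JSON-RPC -1, Operation not permitted), and succeeds again once the mode is back to 0600. The logged value includes the regular-file type bit (0100000) on top of the 0660 mode. A hedged sketch of the check as it behaves here:

```python
import os
import stat

def check_key_file_mode(path):
    """Mirror of the behavior seen in the log: any group/other bits are fatal."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        # The log prints the full st_mode (e.g. 0100660); the mode bits suffice here.
        raise PermissionError(f"Invalid permissions for key file '{path}': {mode:04o}")
```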
keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.310 03:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2rOaDxBT2b 00:37:17.582 03:47:02 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.2rOaDxBT2b 00:37:17.582 03:47:02 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:17.582 03:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.582 03:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.582 03:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.582 03:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.582 03:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.840 03:47:02 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:17.840 03:47:02 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.840 03:47:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.840 03:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.097 [2024-07-21 03:47:03.178392] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2rOaDxBT2b': No such file or directory 00:37:18.097 [2024-07-21 03:47:03.178426] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:18.097 [2024-07-21 03:47:03.178458] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:18.097 [2024-07-21 03:47:03.178472] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:18.097 [2024-07-21 03:47:03.178485] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:18.097 request: 00:37:18.097 { 00:37:18.097 "name": "nvme0", 00:37:18.097 "trtype": "tcp", 00:37:18.097 "traddr": "127.0.0.1", 00:37:18.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:18.097 "adrfam": "ipv4", 00:37:18.097 "trsvcid": "4420", 00:37:18.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:18.097 "psk": "key0", 00:37:18.097 "method": "bdev_nvme_attach_controller", 
00:37:18.097 "req_id": 1 00:37:18.097 } 00:37:18.097 Got JSON-RPC error response 00:37:18.097 response: 00:37:18.097 { 00:37:18.097 "code": -19, 00:37:18.097 "message": "No such device" 00:37:18.097 } 00:37:18.097 03:47:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:18.097 03:47:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:18.097 03:47:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:18.097 03:47:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:18.097 03:47:03 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:18.097 03:47:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:18.353 03:47:03 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q69dLgyheE 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:18.353 03:47:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:18.353 03:47:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q69dLgyheE 00:37:18.354 03:47:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q69dLgyheE 00:37:18.354 03:47:03 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.q69dLgyheE 00:37:18.354 03:47:03 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.q69dLgyheE 00:37:18.354 03:47:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.q69dLgyheE 00:37:18.610 03:47:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.610 03:47:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.866 nvme0n1 00:37:18.866 03:47:04 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:18.866 03:47:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.866 03:47:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.866 03:47:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.866 03:47:04 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.866 03:47:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.123 03:47:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:19.123 03:47:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:19.123 03:47:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:19.381 03:47:04 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:19.381 03:47:04 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:19.381 03:47:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.381 03:47:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.381 03:47:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.638 03:47:04 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:19.638 03:47:04 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:19.638 03:47:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.638 03:47:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.638 03:47:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.638 03:47:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.638 03:47:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.894 03:47:05 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:19.894 03:47:05 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:19.894 03:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.150 03:47:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:20.150 03:47:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:20.150 03:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.407 03:47:05 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:20.407 03:47:05 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.q69dLgyheE 00:37:20.407 03:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.q69dLgyheE 00:37:20.663 03:47:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kCrWPp7Pcj 00:37:20.663 03:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kCrWPp7Pcj 00:37:20.919 03:47:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:20.919 03:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:21.177 nvme0n1 00:37:21.177 03:47:06 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:21.177 03:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:21.435 03:47:06 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:21.435 "subsystems": [ 00:37:21.435 { 00:37:21.435 "subsystem": "keyring", 00:37:21.435 "config": [ 00:37:21.435 { 00:37:21.435 "method": "keyring_file_add_key", 00:37:21.435 "params": { 00:37:21.435 "name": "key0", 00:37:21.435 "path": "/tmp/tmp.q69dLgyheE" 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "keyring_file_add_key", 00:37:21.435 "params": { 00:37:21.435 "name": "key1", 00:37:21.435 "path": "/tmp/tmp.kCrWPp7Pcj" 00:37:21.435 } 00:37:21.435 } 00:37:21.435 ] 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "subsystem": "iobuf", 00:37:21.435 "config": [ 00:37:21.435 { 00:37:21.435 "method": "iobuf_set_options", 00:37:21.435 "params": { 00:37:21.435 "small_pool_count": 8192, 00:37:21.435 "large_pool_count": 1024, 00:37:21.435 "small_bufsize": 8192, 00:37:21.435 "large_bufsize": 135168 00:37:21.435 } 00:37:21.435 } 00:37:21.435 ] 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "subsystem": "sock", 00:37:21.435 "config": [ 00:37:21.435 { 00:37:21.435 "method": "sock_set_default_impl", 00:37:21.435 "params": { 00:37:21.435 "impl_name": "posix" 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "sock_impl_set_options", 00:37:21.435 "params": { 00:37:21.435 "impl_name": "ssl", 00:37:21.435 "recv_buf_size": 4096, 00:37:21.435 "send_buf_size": 4096, 00:37:21.435 "enable_recv_pipe": true, 00:37:21.435 "enable_quickack": false, 00:37:21.435 "enable_placement_id": 0, 00:37:21.435 "enable_zerocopy_send_server": true, 00:37:21.435 "enable_zerocopy_send_client": false, 00:37:21.435 "zerocopy_threshold": 0, 00:37:21.435 "tls_version": 0, 00:37:21.435 "enable_ktls": false 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "sock_impl_set_options", 00:37:21.435 "params": { 00:37:21.435 "impl_name": "posix", 00:37:21.435 "recv_buf_size": 2097152, 00:37:21.435 "send_buf_size": 2097152, 00:37:21.435 "enable_recv_pipe": true, 00:37:21.435 "enable_quickack": false, 00:37:21.435 "enable_placement_id": 0, 00:37:21.435 "enable_zerocopy_send_server": true, 00:37:21.435 "enable_zerocopy_send_client": false, 00:37:21.435 "zerocopy_threshold": 0, 00:37:21.435 "tls_version": 0, 00:37:21.435 "enable_ktls": false 00:37:21.435 } 00:37:21.435 } 00:37:21.435 ] 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "subsystem": "vmd", 00:37:21.435 "config": [] 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "subsystem": "accel", 00:37:21.435 "config": [ 00:37:21.435 { 00:37:21.435 "method": "accel_set_options", 00:37:21.435 "params": { 00:37:21.435 "small_cache_size": 128, 00:37:21.435 "large_cache_size": 16, 00:37:21.435 "task_count": 2048, 00:37:21.435 "sequence_count": 2048, 00:37:21.435 "buf_count": 2048 00:37:21.435 } 00:37:21.435 } 00:37:21.435 ] 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "subsystem": "bdev", 00:37:21.435 "config": [ 00:37:21.435 { 00:37:21.435 "method": "bdev_set_options", 00:37:21.435 "params": { 00:37:21.435 "bdev_io_pool_size": 65535, 00:37:21.435 "bdev_io_cache_size": 256, 00:37:21.435 "bdev_auto_examine": true, 00:37:21.435 "iobuf_small_cache_size": 128, 
00:37:21.435 "iobuf_large_cache_size": 16 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "bdev_raid_set_options", 00:37:21.435 "params": { 00:37:21.435 "process_window_size_kb": 1024 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "bdev_iscsi_set_options", 00:37:21.435 "params": { 00:37:21.435 "timeout_sec": 30 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "bdev_nvme_set_options", 00:37:21.435 "params": { 00:37:21.435 "action_on_timeout": "none", 00:37:21.435 "timeout_us": 0, 00:37:21.435 "timeout_admin_us": 0, 00:37:21.435 "keep_alive_timeout_ms": 10000, 00:37:21.435 "arbitration_burst": 0, 00:37:21.435 "low_priority_weight": 0, 00:37:21.435 "medium_priority_weight": 0, 00:37:21.435 "high_priority_weight": 0, 00:37:21.435 "nvme_adminq_poll_period_us": 10000, 00:37:21.435 "nvme_ioq_poll_period_us": 0, 00:37:21.435 "io_queue_requests": 512, 00:37:21.435 "delay_cmd_submit": true, 00:37:21.435 "transport_retry_count": 4, 00:37:21.435 "bdev_retry_count": 3, 00:37:21.435 "transport_ack_timeout": 0, 00:37:21.435 "ctrlr_loss_timeout_sec": 0, 00:37:21.435 "reconnect_delay_sec": 0, 00:37:21.435 "fast_io_fail_timeout_sec": 0, 00:37:21.435 "disable_auto_failback": false, 00:37:21.435 "generate_uuids": false, 00:37:21.435 "transport_tos": 0, 00:37:21.435 "nvme_error_stat": false, 00:37:21.435 "rdma_srq_size": 0, 00:37:21.435 "io_path_stat": false, 00:37:21.435 "allow_accel_sequence": false, 00:37:21.435 "rdma_max_cq_size": 0, 00:37:21.435 "rdma_cm_event_timeout_ms": 0, 00:37:21.435 "dhchap_digests": [ 00:37:21.435 "sha256", 00:37:21.435 "sha384", 00:37:21.435 "sha512" 00:37:21.435 ], 00:37:21.435 "dhchap_dhgroups": [ 00:37:21.435 "null", 00:37:21.435 "ffdhe2048", 00:37:21.435 "ffdhe3072", 00:37:21.435 "ffdhe4096", 00:37:21.435 "ffdhe6144", 00:37:21.435 "ffdhe8192" 00:37:21.435 ] 00:37:21.435 } 00:37:21.435 }, 00:37:21.435 { 00:37:21.435 "method": "bdev_nvme_attach_controller", 00:37:21.435 "params": { 00:37:21.435 "name": "nvme0", 00:37:21.435 "trtype": "TCP", 00:37:21.435 "adrfam": "IPv4", 00:37:21.435 "traddr": "127.0.0.1", 00:37:21.435 "trsvcid": "4420", 00:37:21.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.435 "prchk_reftag": false, 00:37:21.435 "prchk_guard": false, 00:37:21.435 "ctrlr_loss_timeout_sec": 0, 00:37:21.435 "reconnect_delay_sec": 0, 00:37:21.435 "fast_io_fail_timeout_sec": 0, 00:37:21.435 "psk": "key0", 00:37:21.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.435 "hdgst": false, 00:37:21.436 "ddgst": false 00:37:21.436 } 00:37:21.436 }, 00:37:21.436 { 00:37:21.436 "method": "bdev_nvme_set_hotplug", 00:37:21.436 "params": { 00:37:21.436 "period_us": 100000, 00:37:21.436 "enable": false 00:37:21.436 } 00:37:21.436 }, 00:37:21.436 { 00:37:21.436 "method": "bdev_wait_for_examine" 00:37:21.436 } 00:37:21.436 ] 00:37:21.436 }, 00:37:21.436 { 00:37:21.436 "subsystem": "nbd", 00:37:21.436 "config": [] 00:37:21.436 } 00:37:21.436 ] 00:37:21.436 }' 00:37:21.436 03:47:06 keyring_file -- keyring/file.sh@114 -- # killprocess 2594922 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2594922 ']' 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2594922 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2594922 00:37:21.436 03:47:06 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2594922' 00:37:21.436 killing process with pid 2594922 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@965 -- # kill 2594922 00:37:21.436 Received shutdown signal, test time was about 1.000000 seconds 00:37:21.436 00:37:21.436 Latency(us) 00:37:21.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.436 =================================================================================================================== 00:37:21.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.436 03:47:06 keyring_file -- common/autotest_common.sh@970 -- # wait 2594922 00:37:21.695 03:47:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=2596338 00:37:21.695 03:47:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2596338 /var/tmp/bperf.sock 00:37:21.695 03:47:06 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2596338 ']' 00:37:21.695 03:47:06 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.695 03:47:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:21.695 03:47:06 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:21.695 03:47:06 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
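[Annotation] The JSON captured by `save_config` above (keyring, sock, and bdev subsystems, including the `bdev_nvme_attach_controller` entry with `"psk": "key0"`) is echoed straight into a second bdevperf through `-c /dev/fd/63`, i.e. bash process substitution, so the new instance reconstructs both keys and the controller with no further RPCs. A hedged Python analogue of feeding an in-memory config through an inherited pipe:

```python
import json
import os
import subprocess

def spawn_with_fd_config(argv, config):
    """Feed a JSON config to a child via /dev/fd/N, like bash's -c <(echo ...)."""
    r, w = os.pipe()
    proc = subprocess.Popen(argv + ["-c", f"/dev/fd/{r}"], pass_fds=(r,))
    os.close(r)                    # the child keeps its own copy of the read end
    with os.fdopen(w, "w") as f:
        json.dump(config, f)       # closing the write end gives the child EOF
    return proc

# Illustrative: bdevperf path and flags as they appear in this log.
# spawn_with_fd_config(["build/examples/bdevperf", "-q", "128", "-o", "4k",
#                       "-w", "randrw", "-M", "50", "-t", "1", "-m", "2",
#                       "-r", "/var/tmp/bperf.sock", "-z"], saved_config)
```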
00:37:21.695 03:47:06 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:21.695 03:47:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:21.695 "subsystems": [ 00:37:21.695 { 00:37:21.695 "subsystem": "keyring", 00:37:21.695 "config": [ 00:37:21.695 { 00:37:21.695 "method": "keyring_file_add_key", 00:37:21.695 "params": { 00:37:21.695 "name": "key0", 00:37:21.695 "path": "/tmp/tmp.q69dLgyheE" 00:37:21.695 } 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "method": "keyring_file_add_key", 00:37:21.695 "params": { 00:37:21.695 "name": "key1", 00:37:21.695 "path": "/tmp/tmp.kCrWPp7Pcj" 00:37:21.695 } 00:37:21.695 } 00:37:21.695 ] 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "subsystem": "iobuf", 00:37:21.695 "config": [ 00:37:21.695 { 00:37:21.695 "method": "iobuf_set_options", 00:37:21.695 "params": { 00:37:21.695 "small_pool_count": 8192, 00:37:21.695 "large_pool_count": 1024, 00:37:21.695 "small_bufsize": 8192, 00:37:21.695 "large_bufsize": 135168 00:37:21.695 } 00:37:21.695 } 00:37:21.695 ] 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "subsystem": "sock", 00:37:21.695 "config": [ 00:37:21.695 { 00:37:21.695 "method": "sock_set_default_impl", 00:37:21.695 "params": { 00:37:21.695 "impl_name": "posix" 00:37:21.695 } 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "method": "sock_impl_set_options", 00:37:21.695 "params": { 00:37:21.695 "impl_name": "ssl", 00:37:21.695 "recv_buf_size": 4096, 00:37:21.695 "send_buf_size": 4096, 00:37:21.695 "enable_recv_pipe": true, 00:37:21.695 "enable_quickack": false, 00:37:21.695 "enable_placement_id": 0, 00:37:21.695 "enable_zerocopy_send_server": true, 00:37:21.695 "enable_zerocopy_send_client": false, 00:37:21.695 "zerocopy_threshold": 0, 00:37:21.695 "tls_version": 0, 00:37:21.695 "enable_ktls": false 00:37:21.695 } 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "method": "sock_impl_set_options", 00:37:21.695 "params": { 00:37:21.695 "impl_name": "posix", 00:37:21.695 "recv_buf_size": 2097152, 00:37:21.695 "send_buf_size": 2097152, 00:37:21.695 "enable_recv_pipe": true, 00:37:21.695 "enable_quickack": false, 00:37:21.695 "enable_placement_id": 0, 00:37:21.695 "enable_zerocopy_send_server": true, 00:37:21.695 "enable_zerocopy_send_client": false, 00:37:21.695 "zerocopy_threshold": 0, 00:37:21.695 "tls_version": 0, 00:37:21.695 "enable_ktls": false 00:37:21.695 } 00:37:21.695 } 00:37:21.695 ] 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "subsystem": "vmd", 00:37:21.695 "config": [] 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "subsystem": "accel", 00:37:21.695 "config": [ 00:37:21.695 { 00:37:21.695 "method": "accel_set_options", 00:37:21.695 "params": { 00:37:21.695 "small_cache_size": 128, 00:37:21.695 "large_cache_size": 16, 00:37:21.695 "task_count": 2048, 00:37:21.695 "sequence_count": 2048, 00:37:21.695 "buf_count": 2048 00:37:21.695 } 00:37:21.695 } 00:37:21.695 ] 00:37:21.695 }, 00:37:21.695 { 00:37:21.695 "subsystem": "bdev", 00:37:21.695 "config": [ 00:37:21.695 { 00:37:21.695 "method": "bdev_set_options", 00:37:21.695 "params": { 00:37:21.695 "bdev_io_pool_size": 65535, 00:37:21.695 "bdev_io_cache_size": 256, 00:37:21.696 "bdev_auto_examine": true, 00:37:21.696 "iobuf_small_cache_size": 128, 00:37:21.696 "iobuf_large_cache_size": 16 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_raid_set_options", 00:37:21.696 "params": { 00:37:21.696 "process_window_size_kb": 1024 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_iscsi_set_options", 00:37:21.696 "params": { 00:37:21.696 
"timeout_sec": 30 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_nvme_set_options", 00:37:21.696 "params": { 00:37:21.696 "action_on_timeout": "none", 00:37:21.696 "timeout_us": 0, 00:37:21.696 "timeout_admin_us": 0, 00:37:21.696 "keep_alive_timeout_ms": 10000, 00:37:21.696 "arbitration_burst": 0, 00:37:21.696 "low_priority_weight": 0, 00:37:21.696 "medium_priority_weight": 0, 00:37:21.696 "high_priority_weight": 0, 00:37:21.696 "nvme_adminq_poll_period_us": 10000, 00:37:21.696 "nvme_ioq_poll_period_us": 0, 00:37:21.696 "io_queue_requests": 512, 00:37:21.696 "delay_cmd_submit": true, 00:37:21.696 "transport_retry_count": 4, 00:37:21.696 "bdev_retry_count": 3, 00:37:21.696 "transport_ack_timeout": 0, 00:37:21.696 "ctrlr_loss_timeout_sec": 0, 00:37:21.696 "reconnect_delay_sec": 0, 00:37:21.696 "fast_io_fail_timeout_sec": 0, 00:37:21.696 "disable_auto_failback": false, 00:37:21.696 "generate_uuids": false, 00:37:21.696 "transport_tos": 0, 00:37:21.696 "nvme_error_stat": false, 00:37:21.696 "rdma_srq_size": 0, 00:37:21.696 "io_path_stat": false, 00:37:21.696 "allow_accel_sequence": false, 00:37:21.696 "rdma_max_cq_size": 0, 00:37:21.696 "rdma_cm_event_timeout_ms": 0, 00:37:21.696 "dhchap_digests": [ 00:37:21.696 "sha256", 00:37:21.696 "sha384", 00:37:21.696 "sha512" 00:37:21.696 ], 00:37:21.696 "dhchap_dhgroups": [ 00:37:21.696 "null", 00:37:21.696 "ffdhe2048", 00:37:21.696 "ffdhe3072", 00:37:21.696 "ffdhe4096", 00:37:21.696 "ffdhe6144", 00:37:21.696 "ffdhe8192" 00:37:21.696 ] 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_nvme_attach_controller", 00:37:21.696 "params": { 00:37:21.696 "name": "nvme0", 00:37:21.696 "trtype": "TCP", 00:37:21.696 "adrfam": "IPv4", 00:37:21.696 "traddr": "127.0.0.1", 00:37:21.696 "trsvcid": "4420", 00:37:21.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.696 "prchk_reftag": false, 00:37:21.696 "prchk_guard": false, 00:37:21.696 "ctrlr_loss_timeout_sec": 0, 00:37:21.696 "reconnect_delay_sec": 0, 00:37:21.696 "fast_io_fail_timeout_sec": 0, 00:37:21.696 "psk": "key0", 00:37:21.696 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.696 "hdgst": false, 00:37:21.696 "ddgst": false 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_nvme_set_hotplug", 00:37:21.696 "params": { 00:37:21.696 "period_us": 100000, 00:37:21.696 "enable": false 00:37:21.696 } 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "method": "bdev_wait_for_examine" 00:37:21.696 } 00:37:21.696 ] 00:37:21.696 }, 00:37:21.696 { 00:37:21.696 "subsystem": "nbd", 00:37:21.696 "config": [] 00:37:21.696 } 00:37:21.696 ] 00:37:21.696 }' 00:37:21.696 03:47:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.696 [2024-07-21 03:47:06.916089] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:21.696 [2024-07-21 03:47:06.916172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596338 ] 00:37:21.696 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.696 [2024-07-21 03:47:06.979164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.954 [2024-07-21 03:47:07.070157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.954 [2024-07-21 03:47:07.260714] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:22.889 03:47:07 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:22.889 03:47:07 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:22.889 03:47:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:22.889 03:47:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.889 03:47:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:22.889 03:47:08 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:22.889 03:47:08 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:22.889 03:47:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.889 03:47:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.889 03:47:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.889 03:47:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.889 03:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.147 03:47:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:23.147 03:47:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:23.147 03:47:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.147 03:47:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.147 03:47:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.147 03:47:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.147 03:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.406 03:47:08 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:23.406 03:47:08 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:23.406 03:47:08 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:23.406 03:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:23.663 03:47:08 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:23.663 03:47:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:23.663 03:47:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.q69dLgyheE /tmp/tmp.kCrWPp7Pcj 00:37:23.663 03:47:08 keyring_file -- keyring/file.sh@20 -- # killprocess 2596338 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2596338 ']' 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2596338 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@951 -- # 
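[Annotation] The checks just above confirm the config round-trip worked: the second bdevperf reports two keys, key0 at refcnt 2 (held by the restored nvme0 TLS session) and key1 at refcnt 1, with no keyring RPCs having been replayed by the test itself. Expressed with the `spdk_rpc` and `get_key` sketches from the earlier annotations (assumptions carried over):

```python
keys = spdk_rpc("/var/tmp/bperf.sock", "keyring_get_keys")["result"]
assert len(keys) == 2                        # jq length == 2
assert get_key(keys, "key0")["refcnt"] == 2  # referenced by the restored controller
assert get_key(keys, "key1")["refcnt"] == 1  # restored but unused
```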
uname 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2596338 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:23.663 03:47:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2596338' 00:37:23.664 killing process with pid 2596338 00:37:23.664 03:47:08 keyring_file -- common/autotest_common.sh@965 -- # kill 2596338 00:37:23.664 Received shutdown signal, test time was about 1.000000 seconds 00:37:23.664 00:37:23.664 Latency(us) 00:37:23.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.664 =================================================================================================================== 00:37:23.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:23.664 03:47:08 keyring_file -- common/autotest_common.sh@970 -- # wait 2596338 00:37:23.921 03:47:09 keyring_file -- keyring/file.sh@21 -- # killprocess 2594912 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2594912 ']' 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2594912 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2594912 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2594912' 00:37:23.921 killing process with pid 2594912 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@965 -- # kill 2594912 00:37:23.921 [2024-07-21 03:47:09.130856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:23.921 03:47:09 keyring_file -- common/autotest_common.sh@970 -- # wait 2594912 00:37:24.486 00:37:24.486 real 0m14.018s 00:37:24.486 user 0m35.039s 00:37:24.486 sys 0m3.244s 00:37:24.486 03:47:09 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:24.486 03:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:24.486 ************************************ 00:37:24.486 END TEST keyring_file 00:37:24.486 ************************************ 00:37:24.486 03:47:09 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:24.486 03:47:09 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:24.486 03:47:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:24.486 03:47:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:24.486 03:47:09 -- common/autotest_common.sh@10 -- # set +x 00:37:24.486 ************************************ 00:37:24.486 START TEST keyring_linux 00:37:24.486 ************************************ 00:37:24.486 03:47:09 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:24.486 * Looking for test storage... 
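[Annotation] Both teardowns above (bperf pid 2596338, target pid 2594912) go through the same `killprocess` guard: it reads the pid's command name via `ps --no-headers -o comm=` (reactor_1 and reactor_0 here) and special-cases a `sudo` wrapper before signalling. A hedged sketch of that guard reading /proc instead of shelling out to ps:

```python
import os
import signal

def killprocess(pid):
    """Sketch of the killprocess guard used twice above."""
    with open(f"/proc/{pid}/comm") as f:
        comm = f.read().strip()        # equivalent of ps --no-headers -o comm=
    # In this log comm is reactor_0/reactor_1; a "sudo" wrapper would need its
    # child signalled instead (not sketched here).
    if comm == "sudo":
        raise NotImplementedError("target the sudo child, not the wrapper")
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
```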
00:37:24.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:24.486 03:47:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:24.486 03:47:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.486 03:47:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:24.486 03:47:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.486 03:47:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.486 03:47:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.486 03:47:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:24.487 03:47:09 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.487 03:47:09 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.487 03:47:09 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.487 03:47:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.487 03:47:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.487 03:47:09 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.487 03:47:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:24.487 03:47:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:24.487 03:47:09 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:24.487 /tmp/:spdk-test:key0 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:24.487 03:47:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:24.487 03:47:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:24.487 /tmp/:spdk-test:key1 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2596736 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:24.487 03:47:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2596736 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2596736 ']' 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:24.487 03:47:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:24.745 [2024-07-21 03:47:09.819838] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
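prep_key above wraps each raw hex key in the NVMe TLS interchange form (prefix NVMeTLSkey-1, a two-digit digest field, a base64 body, a trailing colon), writes it to a 0600 file, and echoes the path. A sketch of that wrapping, assuming the base64 body is the key characters followed by a little-endian CRC32 trailer, which is consistent with the payloads printed later in this test; format_interchange_psk itself lives in the nvmf common script and may differ in detail:

key=00112233445566778899aabbccddeeff
path=/tmp/:spdk-test:key0
psk=$(python3 - "$key" <<'PY'
import base64, sys, zlib
raw = sys.argv[1].encode()                   # hex string used as raw bytes
crc = zlib.crc32(raw).to_bytes(4, "little")  # assumed trailer layout
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())  # 00 = digest 0
PY
)
echo -n "$psk" > "$path"
chmod 0600 "$path"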
00:37:24.745 [2024-07-21 03:47:09.819944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596736 ] 00:37:24.745 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.745 [2024-07-21 03:47:09.878048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.745 [2024-07-21 03:47:09.966069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.003 03:47:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:25.003 03:47:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:25.003 03:47:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:25.003 03:47:10 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.003 03:47:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:25.003 [2024-07-21 03:47:10.226361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.003 null0 00:37:25.003 [2024-07-21 03:47:10.258420] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:25.003 [2024-07-21 03:47:10.258916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:25.003 03:47:10 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.003 03:47:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:25.003 273778020 00:37:25.004 03:47:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:25.004 121764532 00:37:25.004 03:47:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2596810 00:37:25.004 03:47:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2596810 /var/tmp/bperf.sock 00:37:25.004 03:47:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2596810 ']' 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:25.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:25.004 03:47:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:25.262 [2024-07-21 03:47:10.327557] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
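The setup traced above loads both interchange PSKs into the kernel session keyring (keyctl prints the new serial numbers, 273778020 and 121764532 in this run), then launches bdevperf paused so it can be configured before any I/O starts. A condensed sketch under the same paths; the trace passes the payload strings literally rather than reading them back from the /tmp files, and the polling loop below is a stand-in for the script's waitforlisten helper:

keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints 273778020 here
keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s   # prints 121764532 here

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z --wait-for-rpc &
bperfpid=$!

# wait until the RPC socket answers, then enable the linux keyring plugin
# and let framework init proceed (the steps traced below)
until "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
"$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init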
00:37:25.262 [2024-07-21 03:47:10.327673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596810 ] 00:37:25.262 EAL: No free 2048 kB hugepages reported on node 1 00:37:25.262 [2024-07-21 03:47:10.390423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.262 [2024-07-21 03:47:10.475469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.262 03:47:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:25.262 03:47:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:25.262 03:47:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:25.262 03:47:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:25.519 03:47:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:25.519 03:47:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:26.085 03:47:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:26.085 03:47:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:26.085 [2024-07-21 03:47:11.373905] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:26.342 nvme0n1 00:37:26.342 03:47:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:26.342 03:47:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:26.342 03:47:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:26.342 03:47:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:26.342 03:47:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:26.342 03:47:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.599 03:47:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:26.599 03:47:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:26.599 03:47:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:26.599 03:47:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:26.599 03:47:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.599 03:47:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:26.599 03:47:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.856 03:47:11 keyring_linux -- keyring/linux.sh@25 -- # sn=273778020 00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
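check_keys above asks the bdev layer for its view of the key (count, name, and sn extracted via jq), and the lines just below compare that serial against the kernel's and dump the payload. Reduced to the two lookups, with this run's values:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sn_rpc=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')        # 273778020
sn_kernel=$(keyctl search @s user :spdk-test:key0)                   # 273778020
[ "$sn_rpc" = "$sn_kernel" ] || echo "serial mismatch" >&2
keyctl print "$sn_kernel"   # expect NVMeTLSkey-1:00:MDAx...JEiQ:

Base64-decoding that payload body yields the 32 key characters plus a 4-byte trailer, which is what the CRC32 assumption in the earlier sketch predicts.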
00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 273778020 == \2\7\3\7\7\8\0\2\0 ]] 00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 273778020 00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:26.857 03:47:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.857 Running I/O for 1 seconds... 00:37:27.789 00:37:27.789 Latency(us) 00:37:27.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:27.789 nvme0n1 : 1.01 8263.58 32.28 0.00 0.00 15363.90 8446.86 24369.68 00:37:27.789 =================================================================================================================== 00:37:27.789 Total : 8263.58 32.28 0.00 0.00 15363.90 8446.86 24369.68 00:37:27.789 0 00:37:27.789 03:47:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:27.789 03:47:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:28.065 03:47:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:28.065 03:47:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:28.065 03:47:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:28.065 03:47:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:28.065 03:47:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.065 03:47:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:28.325 03:47:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:28.325 03:47:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:28.325 03:47:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:28.325 03:47:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.325 03:47:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.325 03:47:13 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.582 [2024-07-21 03:47:13.815013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:28.582 [2024-07-21 03:47:13.815635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9fea0 (107): Transport endpoint is not connected 00:37:28.582 [2024-07-21 03:47:13.816625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9fea0 (9): Bad file descriptor 00:37:28.582 [2024-07-21 03:47:13.817620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:28.582 [2024-07-21 03:47:13.817641] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:28.582 [2024-07-21 03:47:13.817657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:28.582 request: 00:37:28.582 { 00:37:28.582 "name": "nvme0", 00:37:28.582 "trtype": "tcp", 00:37:28.582 "traddr": "127.0.0.1", 00:37:28.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.582 "adrfam": "ipv4", 00:37:28.582 "trsvcid": "4420", 00:37:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.582 "psk": ":spdk-test:key1", 00:37:28.582 "method": "bdev_nvme_attach_controller", 00:37:28.582 "req_id": 1 00:37:28.582 } 00:37:28.582 Got JSON-RPC error response 00:37:28.582 response: 00:37:28.582 { 00:37:28.582 "code": -5, 00:37:28.582 "message": "Input/output error" 00:37:28.582 } 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@33 -- # sn=273778020 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 273778020 00:37:28.582 1 links removed 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@33 -- # sn=121764532 00:37:28.582 03:47:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 121764532 00:37:28.582 1 links removed 00:37:28.582 03:47:13 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 2596810 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2596810 ']' 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2596810 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2596810 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2596810' 00:37:28.582 killing process with pid 2596810 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@965 -- # kill 2596810 00:37:28.582 Received shutdown signal, test time was about 1.000000 seconds 00:37:28.582 00:37:28.582 Latency(us) 00:37:28.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.582 =================================================================================================================== 00:37:28.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.582 03:47:13 keyring_linux -- common/autotest_common.sh@970 -- # wait 2596810 00:37:28.839 03:47:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2596736 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2596736 ']' 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2596736 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2596736 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2596736' 00:37:28.839 killing process with pid 2596736 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@965 -- # kill 2596736 00:37:28.839 03:47:14 keyring_linux -- common/autotest_common.sh@970 -- # wait 2596736 00:37:29.403 00:37:29.403 real 0m4.935s 00:37:29.403 user 0m9.437s 00:37:29.403 sys 0m1.671s 00:37:29.403 03:47:14 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:29.403 03:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:29.403 ************************************ 00:37:29.403 END TEST keyring_linux 00:37:29.403 ************************************ 00:37:29.403 03:47:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@352 -- 
# '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:29.403 03:47:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:29.403 03:47:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:29.403 03:47:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:29.403 03:47:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:29.403 03:47:14 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:29.403 03:47:14 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:29.403 03:47:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:29.403 03:47:14 -- common/autotest_common.sh@10 -- # set +x 00:37:29.403 03:47:14 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:29.403 03:47:14 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:29.403 03:47:14 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:29.403 03:47:14 -- common/autotest_common.sh@10 -- # set +x 00:37:31.302 INFO: APP EXITING 00:37:31.302 INFO: killing all VMs 00:37:31.302 INFO: killing vhost app 00:37:31.302 INFO: EXIT DONE 00:37:32.236 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:32.236 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:32.236 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:32.236 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:32.236 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:32.236 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:32.236 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:32.236 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:32.236 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:32.236 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:32.236 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:32.236 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:32.236 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:32.236 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:32.236 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:32.494 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:32.494 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:33.870 Cleaning 00:37:33.870 Removing: /var/run/dpdk/spdk0/config 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:33.870 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:33.870 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:33.870 Removing: /var/run/dpdk/spdk1/config 00:37:33.870 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:33.870 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:33.870 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:33.871 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:33.871 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:33.871 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:33.871 Removing: /var/run/dpdk/spdk2/config 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:33.871 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:33.871 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:33.871 Removing: /var/run/dpdk/spdk3/config 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:33.871 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:33.871 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:33.871 Removing: /var/run/dpdk/spdk4/config 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:33.871 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:33.871 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:33.871 Removing: /dev/shm/bdev_svc_trace.1 00:37:33.871 Removing: /dev/shm/nvmf_trace.0 00:37:33.871 Removing: /dev/shm/spdk_tgt_trace.pid2277691 00:37:33.871 Removing: /var/run/dpdk/spdk0 00:37:33.871 Removing: /var/run/dpdk/spdk1 00:37:33.871 Removing: /var/run/dpdk/spdk2 00:37:33.871 Removing: /var/run/dpdk/spdk3 00:37:33.871 Removing: /var/run/dpdk/spdk4 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2276140 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2276871 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2277691 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2278121 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2278812 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2278948 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2279671 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2279677 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2279919 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2281110 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2282154 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2282350 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2282653 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2282854 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2283044 00:37:33.871 Removing: 
/var/run/dpdk/spdk_pid2283201 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2283364 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2283542 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2284115 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2286468 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2286632 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2286792 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2286822 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287228 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287237 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287662 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287671 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287966 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2287971 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2288135 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2288271 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2288635 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2288793 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2288984 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289154 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289291 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289365 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289518 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289796 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2289955 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2290112 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2290313 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2290543 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2290700 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2290851 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2291125 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2291283 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2291443 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2291613 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2291889 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292050 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292202 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292457 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292634 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292802 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2292953 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2293228 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2293300 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2293504 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2295676 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2349344 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2351863 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2358804 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2362481 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2364813 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2365230 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2372458 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2372461 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2373106 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2373656 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2374314 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2374707 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2374718 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2374969 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2374990 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2375106 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2375649 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2376301 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2376961 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2377367 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2377376 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2377631 00:37:33.871 Removing: 
/var/run/dpdk/spdk_pid2378507 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2379230 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2384574 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2384736 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2387237 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2391196 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2393715 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2399972 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2405178 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2406487 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2407148 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2417209 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2419425 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2444459 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2447237 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2448415 00:37:33.871 Removing: /var/run/dpdk/spdk_pid2449852 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2449939 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2450011 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2450147 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2450935 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2452275 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2452995 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2453304 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2454912 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2455329 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2455777 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2458161 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2461421 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2464951 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2488466 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2491192 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2494957 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2495899 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2496994 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2499472 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2501762 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2505904 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2505968 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2508734 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2508871 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2509005 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2509275 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2509398 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2510471 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2511761 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2513448 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2514625 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2515801 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2516979 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2520778 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2521107 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2522388 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2523121 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2526828 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2528685 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2532084 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2535324 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2542141 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2546560 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2546562 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2558758 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2559172 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2559576 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2560087 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2560560 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2561020 00:37:34.130 Removing: 
/var/run/dpdk/spdk_pid2561490 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2561899 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2564305 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2564532 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2568319 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2568367 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2570062 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2575360 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2575420 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2578505 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2579908 00:37:34.130 Removing: /var/run/dpdk/spdk_pid2581306 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2582049 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2583446 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2584325 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2589526 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2589862 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2590249 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2591802 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2592082 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2592478 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2594912 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2594922 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2596338 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2596736 00:37:34.131 Removing: /var/run/dpdk/spdk_pid2596810 00:37:34.131 Clean 00:37:34.131 03:47:19 -- common/autotest_common.sh@1447 -- # return 0 00:37:34.131 03:47:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:34.131 03:47:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:34.131 03:47:19 -- common/autotest_common.sh@10 -- # set +x 00:37:34.131 03:47:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:34.131 03:47:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:34.131 03:47:19 -- common/autotest_common.sh@10 -- # set +x 00:37:34.389 03:47:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:34.389 03:47:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:34.389 03:47:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:34.389 03:47:19 -- spdk/autotest.sh@391 -- # hash lcov 00:37:34.389 03:47:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:34.389 03:47:19 -- spdk/autotest.sh@393 -- # hostname 00:37:34.389 03:47:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:34.389 geninfo: WARNING: invalid characters removed from testname! 
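The coverage steps running here and just below follow a capture, merge, filter sequence: capture the post-test counters, add them to the cov_base.info baseline taken earlier in the job, then strip trees that should not count toward SPDK coverage. Condensed, with the genhtml/geninfo rc flags abridged; -t spdk-gp-11 in the trace is just the node's hostname:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"        # capture
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
    -o "$OUT/cov_total.info"                                                  # merge
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"  # drop bundled DPDK
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"    # drop system trees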
00:38:06.447 03:47:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:06.706 03:47:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:09.993 03:47:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:12.530 03:47:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:15.898 03:48:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:18.425 03:48:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:22.602 03:48:07 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:22.602 03:48:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.602 03:48:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:22.603 03:48:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.603 03:48:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.603 03:48:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.603 03:48:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.603 03:48:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.603 03:48:07 -- paths/export.sh@5 -- $ export PATH 00:38:22.603 03:48:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.603 03:48:07 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:22.603 03:48:07 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:22.603 03:48:07 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721526487.XXXXXX 00:38:22.603 03:48:07 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721526487.UlJTKS 00:38:22.603 03:48:07 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:22.603 03:48:07 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:38:22.603 03:48:07 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:22.603 03:48:07 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:22.603 03:48:07 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:22.603 03:48:07 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:22.603 03:48:07 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:22.603 03:48:07 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:22.603 03:48:07 -- common/autotest_common.sh@10 -- $ set +x 00:38:22.603 03:48:07 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:22.603 03:48:07 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:22.603 03:48:07 -- pm/common@17 -- $ local monitor 00:38:22.603 03:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.603 03:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.603 03:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.603 
03:48:07 -- pm/common@21 -- $ date +%s 00:38:22.603 03:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.603 03:48:07 -- pm/common@21 -- $ date +%s 00:38:22.603 03:48:07 -- pm/common@25 -- $ sleep 1 00:38:22.603 03:48:07 -- pm/common@21 -- $ date +%s 00:38:22.603 03:48:07 -- pm/common@21 -- $ date +%s 00:38:22.603 03:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721526487 00:38:22.603 03:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721526487 00:38:22.603 03:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721526487 00:38:22.603 03:48:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721526487 00:38:22.603 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721526487_collect-vmstat.pm.log 00:38:22.603 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721526487_collect-cpu-load.pm.log 00:38:22.603 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721526487_collect-cpu-temp.pm.log 00:38:22.603 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721526487_collect-bmc-pm.bmc.pm.log 00:38:23.168 03:48:08 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:23.169 03:48:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:23.169 03:48:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.169 03:48:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:23.169 03:48:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:23.169 03:48:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:23.169 03:48:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:23.169 03:48:08 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:23.169 03:48:08 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:23.169 03:48:08 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:23.169 03:48:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:23.169 03:48:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:23.169 03:48:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:23.169 03:48:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:23.169 03:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:23.169 03:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:23.169 03:48:08 -- pm/common@44 -- $ pid=2608456 00:38:23.169 03:48:08 -- pm/common@50 -- $ kill -TERM 2608456 00:38:23.169 03:48:08 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:23.169 03:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:23.169 03:48:08 -- pm/common@44 -- $ pid=2608458 00:38:23.169 03:48:08 -- pm/common@50 -- $ kill -TERM 2608458 00:38:23.169 03:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:23.169 03:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:23.169 03:48:08 -- pm/common@44 -- $ pid=2608460 00:38:23.169 03:48:08 -- pm/common@50 -- $ kill -TERM 2608460 00:38:23.169 03:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:23.169 03:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:23.169 03:48:08 -- pm/common@44 -- $ pid=2608495 00:38:23.169 03:48:08 -- pm/common@50 -- $ sudo -E kill -TERM 2608495 00:38:23.169 + [[ -n 2171025 ]] 00:38:23.169 + sudo kill 2171025 00:38:23.181 [Pipeline] } 00:38:23.202 [Pipeline] // stage 00:38:23.208 [Pipeline] } 00:38:23.228 [Pipeline] // timeout 00:38:23.234 [Pipeline] } 00:38:23.253 [Pipeline] // catchError 00:38:23.259 [Pipeline] } 00:38:23.300 [Pipeline] // wrap 00:38:23.307 [Pipeline] } 00:38:23.328 [Pipeline] // catchError 00:38:23.338 [Pipeline] stage 00:38:23.341 [Pipeline] { (Epilogue) 00:38:23.360 [Pipeline] catchError 00:38:23.362 [Pipeline] { 00:38:23.380 [Pipeline] echo 00:38:23.382 Cleanup processes 00:38:23.390 [Pipeline] sh 00:38:23.672 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.672 2608609 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:23.672 2608952 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.687 [Pipeline] sh 00:38:23.966 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.966 ++ grep -v 'sudo pgrep' 00:38:23.966 ++ awk '{print $1}' 00:38:23.966 + sudo kill -9 2608609 00:38:23.979 [Pipeline] sh 00:38:24.258 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:34.242 [Pipeline] sh 00:38:34.524 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:34.525 Artifacts sizes are good 00:38:34.539 [Pipeline] archiveArtifacts 00:38:34.546 Archiving artifacts 00:38:34.770 [Pipeline] sh 00:38:35.047 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:35.061 [Pipeline] cleanWs 00:38:35.070 [WS-CLEANUP] Deleting project workspace... 00:38:35.071 [WS-CLEANUP] Deferred wipeout is used... 00:38:35.078 [WS-CLEANUP] done 00:38:35.079 [Pipeline] } 00:38:35.095 [Pipeline] // catchError 00:38:35.110 [Pipeline] sh 00:38:35.393 + logger -p user.info -t JENKINS-CI 00:38:35.401 [Pipeline] } 00:38:35.418 [Pipeline] // stage 00:38:35.424 [Pipeline] } 00:38:35.442 [Pipeline] // node 00:38:35.448 [Pipeline] End of Pipeline 00:38:35.480 Finished: SUCCESS